Test Report: KVM_Linux_crio 19312

c58167e77f3b0efe0c3c561ff8e0552b34c41906:2024-07-22:35447

Failed tests (31/326)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 151.91
41 TestAddons/parallel/MetricsServer 322.91
54 TestAddons/StoppedEnableDisable 154.4
157 TestFunctional/parallel/ImageCommands/ImageRemove 1.11
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.61
173 TestMultiControlPlane/serial/StopSecondaryNode 141.8
175 TestMultiControlPlane/serial/RestartSecondaryNode 60.07
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 359.86
180 TestMultiControlPlane/serial/StopCluster 141.42
240 TestMultiNode/serial/RestartKeepsNodes 324.54
242 TestMultiNode/serial/StopMultiNode 141.29
249 TestPreload 269.83
257 TestKubernetesUpgrade 381.65
334 TestStartStop/group/old-k8s-version/serial/FirstStart 287.22
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.15
362 TestStartStop/group/no-preload/serial/Stop 139.3
371 TestStartStop/group/embed-certs/serial/Stop 139.04
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
373 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
374 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 77.18
376 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
380 TestStartStop/group/old-k8s-version/serial/SecondStart 738.39
381 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
383 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.17
384 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.3
385 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.04
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.41
387 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 543.96
388 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 348.35
389 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 354.51
390 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 153.37
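
Any of these failures can usually be reproduced locally by re-running the named test against the same driver and runtime as this job. The following is a hedged sketch, not the CI invocation itself: it assumes a minikube source checkout at the commit above with out/minikube-linux-amd64 already built, and the -minikube-start-args flag name is taken from minikube's integration-test harness as documented and may differ across versions.

    # Re-run a single failed integration test by name (KVM + cri-o, matching this run).
    go test ./test/integration -v -timeout 60m \
      -run 'TestAddons/parallel/Ingress' \
      -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'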
TestAddons/parallel/Ingress (151.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-688294 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-688294 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-688294 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0b5aca8f-8b07-4191-ba5e-991bdee098bd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0b5aca8f-8b07-4191-ba5e-991bdee098bd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004242053s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-688294 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.984850904s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
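
The exit status 28 above is the exit code of the curl process inside the VM, and 28 is curl's "operation timed out" error (CURLE_OPERATION_TIMEDOUT), which suggests nothing answered on 127.0.0.1:80 before curl gave up rather than an ssh failure. A minimal manual probe, assuming the addons-688294 profile from this run still exists (the -m 10 bound and the endpoints check are illustrative additions, not part of the test):

    # Bound the request so a hang fails fast, and print only the HTTP status code.
    out/minikube-linux-amd64 -p addons-688294 ssh \
      "curl -s -m 10 -o /dev/null -w '%{http_code}' -H 'Host: nginx.example.com' http://127.0.0.1/"
    # Verify the ingress controller actually has ready endpoints behind its Service.
    kubectl --context addons-688294 -n ingress-nginx get pods,endpoints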
addons_test.go:288: (dbg) Run:  kubectl --context addons-688294 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.142
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-688294 addons disable ingress-dns --alsologtostderr -v=1: (1.406321199s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-688294 addons disable ingress --alsologtostderr -v=1: (7.652205285s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-688294 -n addons-688294
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-688294 logs -n 25: (1.15733519s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-870595                                                                     | download-only-870595 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-825436                                                                     | download-only-825436 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-576339                                                                     | download-only-576339 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-870595                                                                     | download-only-870595 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-302887 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | binary-mirror-302887                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36193                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-302887                                                                     | binary-mirror-302887 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| addons  | disable dashboard -p                                                                        | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | addons-688294                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | addons-688294                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-688294 --wait=true                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:27 UTC | 21 Jul 24 23:27 UTC |
	|         | -p addons-688294                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | -p addons-688294                                                                            |                      |         |         |                     |                     |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-688294 ip                                                                            | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | addons-688294                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-688294 ssh curl -s                                                                   | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-688294 ssh cat                                                                       | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | /opt/local-path-provisioner/pvc-46a377b6-b11e-4fc9-9633-78f2e49f996d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:29 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:29 UTC | 21 Jul 24 23:29 UTC |
	|         | addons-688294                                                                               |                      |         |         |                     |                     |
	| addons  | addons-688294 addons                                                                        | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:29 UTC | 21 Jul 24 23:29 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-688294 addons                                                                        | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:29 UTC | 21 Jul 24 23:29 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-688294 ip                                                                            | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:30 UTC | 21 Jul 24 23:30 UTC |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:30 UTC | 21 Jul 24 23:30 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:30 UTC | 21 Jul 24 23:30 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:25:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:25:33.362987   13262 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:25:33.363081   13262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:33.363088   13262 out.go:304] Setting ErrFile to fd 2...
	I0721 23:25:33.363093   13262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:33.363238   13262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:25:33.363820   13262 out.go:298] Setting JSON to false
	I0721 23:25:33.364593   13262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":477,"bootTime":1721603856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:25:33.364649   13262 start.go:139] virtualization: kvm guest
	I0721 23:25:33.366935   13262 out.go:177] * [addons-688294] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:25:33.368340   13262 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:25:33.368351   13262 notify.go:220] Checking for updates...
	I0721 23:25:33.370905   13262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:25:33.372338   13262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:25:33.373521   13262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:25:33.374884   13262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0721 23:25:33.376082   13262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:25:33.377423   13262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:25:33.407892   13262 out.go:177] * Using the kvm2 driver based on user configuration
	I0721 23:25:33.409017   13262 start.go:297] selected driver: kvm2
	I0721 23:25:33.409033   13262 start.go:901] validating driver "kvm2" against <nil>
	I0721 23:25:33.409043   13262 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:25:33.409651   13262 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:33.409710   13262 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:25:33.423454   13262 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:25:33.423499   13262 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:25:33.423706   13262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:25:33.423746   13262 cni.go:84] Creating CNI manager for ""
	I0721 23:25:33.423753   13262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:25:33.423763   13262 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 23:25:33.423821   13262 start.go:340] cluster config:
	{Name:addons-688294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:25:33.423908   13262 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:33.425757   13262 out.go:177] * Starting "addons-688294" primary control-plane node in "addons-688294" cluster
	I0721 23:25:33.426813   13262 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:25:33.426840   13262 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0721 23:25:33.426849   13262 cache.go:56] Caching tarball of preloaded images
	I0721 23:25:33.426925   13262 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:25:33.426938   13262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:25:33.427223   13262 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/config.json ...
	I0721 23:25:33.427242   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/config.json: {Name:mka4e120652124e50c186dfd7958e54dc35e98eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:25:33.427403   13262 start.go:360] acquireMachinesLock for addons-688294: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:25:33.427460   13262 start.go:364] duration metric: took 40.193µs to acquireMachinesLock for "addons-688294"
	I0721 23:25:33.427483   13262 start.go:93] Provisioning new machine with config: &{Name:addons-688294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:25:33.427538   13262 start.go:125] createHost starting for "" (driver="kvm2")
	I0721 23:25:33.429092   13262 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0721 23:25:33.429215   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:25:33.429253   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:25:33.443407   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0721 23:25:33.443789   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:25:33.444311   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:25:33.444347   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:25:33.444666   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:25:33.444857   13262 main.go:141] libmachine: (addons-688294) Calling .GetMachineName
	I0721 23:25:33.444992   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:33.445133   13262 start.go:159] libmachine.API.Create for "addons-688294" (driver="kvm2")
	I0721 23:25:33.445178   13262 client.go:168] LocalClient.Create starting
	I0721 23:25:33.445222   13262 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0721 23:25:33.521741   13262 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0721 23:25:33.595014   13262 main.go:141] libmachine: Running pre-create checks...
	I0721 23:25:33.595036   13262 main.go:141] libmachine: (addons-688294) Calling .PreCreateCheck
	I0721 23:25:33.595553   13262 main.go:141] libmachine: (addons-688294) Calling .GetConfigRaw
	I0721 23:25:33.595996   13262 main.go:141] libmachine: Creating machine...
	I0721 23:25:33.596009   13262 main.go:141] libmachine: (addons-688294) Calling .Create
	I0721 23:25:33.596134   13262 main.go:141] libmachine: (addons-688294) Creating KVM machine...
	I0721 23:25:33.597178   13262 main.go:141] libmachine: (addons-688294) DBG | found existing default KVM network
	I0721 23:25:33.597905   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:33.597774   13284 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0721 23:25:33.597948   13262 main.go:141] libmachine: (addons-688294) DBG | created network xml: 
	I0721 23:25:33.597967   13262 main.go:141] libmachine: (addons-688294) DBG | <network>
	I0721 23:25:33.597999   13262 main.go:141] libmachine: (addons-688294) DBG |   <name>mk-addons-688294</name>
	I0721 23:25:33.598009   13262 main.go:141] libmachine: (addons-688294) DBG |   <dns enable='no'/>
	I0721 23:25:33.598015   13262 main.go:141] libmachine: (addons-688294) DBG |   
	I0721 23:25:33.598025   13262 main.go:141] libmachine: (addons-688294) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0721 23:25:33.598033   13262 main.go:141] libmachine: (addons-688294) DBG |     <dhcp>
	I0721 23:25:33.598039   13262 main.go:141] libmachine: (addons-688294) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0721 23:25:33.598044   13262 main.go:141] libmachine: (addons-688294) DBG |     </dhcp>
	I0721 23:25:33.598049   13262 main.go:141] libmachine: (addons-688294) DBG |   </ip>
	I0721 23:25:33.598053   13262 main.go:141] libmachine: (addons-688294) DBG |   
	I0721 23:25:33.598058   13262 main.go:141] libmachine: (addons-688294) DBG | </network>
	I0721 23:25:33.598064   13262 main.go:141] libmachine: (addons-688294) DBG | 
	I0721 23:25:33.603212   13262 main.go:141] libmachine: (addons-688294) DBG | trying to create private KVM network mk-addons-688294 192.168.39.0/24...
	I0721 23:25:33.666394   13262 main.go:141] libmachine: (addons-688294) DBG | private KVM network mk-addons-688294 192.168.39.0/24 created
	I0721 23:25:33.666418   13262 main.go:141] libmachine: (addons-688294) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294 ...
	I0721 23:25:33.666435   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:33.666348   13284 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:25:33.666458   13262 main.go:141] libmachine: (addons-688294) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:25:33.666485   13262 main.go:141] libmachine: (addons-688294) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:25:33.917964   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:33.917842   13284 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa...
	I0721 23:25:34.048910   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:34.048732   13284 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/addons-688294.rawdisk...
	I0721 23:25:34.048936   13262 main.go:141] libmachine: (addons-688294) DBG | Writing magic tar header
	I0721 23:25:34.048945   13262 main.go:141] libmachine: (addons-688294) DBG | Writing SSH key tar header
	I0721 23:25:34.048953   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:34.048876   13284 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294 ...
	I0721 23:25:34.048964   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294
	I0721 23:25:34.048984   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0721 23:25:34.049000   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:25:34.049012   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294 (perms=drwx------)
	I0721 23:25:34.049025   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0721 23:25:34.049032   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0721 23:25:34.049061   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0721 23:25:34.049073   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0721 23:25:34.049085   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0721 23:25:34.049102   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0721 23:25:34.049113   13262 main.go:141] libmachine: (addons-688294) Creating domain...
	I0721 23:25:34.049123   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0721 23:25:34.049130   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins
	I0721 23:25:34.049137   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home
	I0721 23:25:34.049145   13262 main.go:141] libmachine: (addons-688294) DBG | Skipping /home - not owner
	I0721 23:25:34.050083   13262 main.go:141] libmachine: (addons-688294) define libvirt domain using xml: 
	I0721 23:25:34.050112   13262 main.go:141] libmachine: (addons-688294) <domain type='kvm'>
	I0721 23:25:34.050120   13262 main.go:141] libmachine: (addons-688294)   <name>addons-688294</name>
	I0721 23:25:34.050126   13262 main.go:141] libmachine: (addons-688294)   <memory unit='MiB'>4000</memory>
	I0721 23:25:34.050131   13262 main.go:141] libmachine: (addons-688294)   <vcpu>2</vcpu>
	I0721 23:25:34.050135   13262 main.go:141] libmachine: (addons-688294)   <features>
	I0721 23:25:34.050141   13262 main.go:141] libmachine: (addons-688294)     <acpi/>
	I0721 23:25:34.050146   13262 main.go:141] libmachine: (addons-688294)     <apic/>
	I0721 23:25:34.050153   13262 main.go:141] libmachine: (addons-688294)     <pae/>
	I0721 23:25:34.050157   13262 main.go:141] libmachine: (addons-688294)     
	I0721 23:25:34.050169   13262 main.go:141] libmachine: (addons-688294)   </features>
	I0721 23:25:34.050174   13262 main.go:141] libmachine: (addons-688294)   <cpu mode='host-passthrough'>
	I0721 23:25:34.050179   13262 main.go:141] libmachine: (addons-688294)   
	I0721 23:25:34.050185   13262 main.go:141] libmachine: (addons-688294)   </cpu>
	I0721 23:25:34.050190   13262 main.go:141] libmachine: (addons-688294)   <os>
	I0721 23:25:34.050197   13262 main.go:141] libmachine: (addons-688294)     <type>hvm</type>
	I0721 23:25:34.050202   13262 main.go:141] libmachine: (addons-688294)     <boot dev='cdrom'/>
	I0721 23:25:34.050211   13262 main.go:141] libmachine: (addons-688294)     <boot dev='hd'/>
	I0721 23:25:34.050234   13262 main.go:141] libmachine: (addons-688294)     <bootmenu enable='no'/>
	I0721 23:25:34.050252   13262 main.go:141] libmachine: (addons-688294)   </os>
	I0721 23:25:34.050259   13262 main.go:141] libmachine: (addons-688294)   <devices>
	I0721 23:25:34.050265   13262 main.go:141] libmachine: (addons-688294)     <disk type='file' device='cdrom'>
	I0721 23:25:34.050285   13262 main.go:141] libmachine: (addons-688294)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/boot2docker.iso'/>
	I0721 23:25:34.050296   13262 main.go:141] libmachine: (addons-688294)       <target dev='hdc' bus='scsi'/>
	I0721 23:25:34.050309   13262 main.go:141] libmachine: (addons-688294)       <readonly/>
	I0721 23:25:34.050323   13262 main.go:141] libmachine: (addons-688294)     </disk>
	I0721 23:25:34.050332   13262 main.go:141] libmachine: (addons-688294)     <disk type='file' device='disk'>
	I0721 23:25:34.050340   13262 main.go:141] libmachine: (addons-688294)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0721 23:25:34.050354   13262 main.go:141] libmachine: (addons-688294)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/addons-688294.rawdisk'/>
	I0721 23:25:34.050366   13262 main.go:141] libmachine: (addons-688294)       <target dev='hda' bus='virtio'/>
	I0721 23:25:34.050374   13262 main.go:141] libmachine: (addons-688294)     </disk>
	I0721 23:25:34.050385   13262 main.go:141] libmachine: (addons-688294)     <interface type='network'>
	I0721 23:25:34.050526   13262 main.go:141] libmachine: (addons-688294)       <source network='mk-addons-688294'/>
	I0721 23:25:34.050565   13262 main.go:141] libmachine: (addons-688294)       <model type='virtio'/>
	I0721 23:25:34.050582   13262 main.go:141] libmachine: (addons-688294)     </interface>
	I0721 23:25:34.050593   13262 main.go:141] libmachine: (addons-688294)     <interface type='network'>
	I0721 23:25:34.050624   13262 main.go:141] libmachine: (addons-688294)       <source network='default'/>
	I0721 23:25:34.050640   13262 main.go:141] libmachine: (addons-688294)       <model type='virtio'/>
	I0721 23:25:34.050651   13262 main.go:141] libmachine: (addons-688294)     </interface>
	I0721 23:25:34.050661   13262 main.go:141] libmachine: (addons-688294)     <serial type='pty'>
	I0721 23:25:34.050671   13262 main.go:141] libmachine: (addons-688294)       <target port='0'/>
	I0721 23:25:34.050681   13262 main.go:141] libmachine: (addons-688294)     </serial>
	I0721 23:25:34.050687   13262 main.go:141] libmachine: (addons-688294)     <console type='pty'>
	I0721 23:25:34.050701   13262 main.go:141] libmachine: (addons-688294)       <target type='serial' port='0'/>
	I0721 23:25:34.050725   13262 main.go:141] libmachine: (addons-688294)     </console>
	I0721 23:25:34.050745   13262 main.go:141] libmachine: (addons-688294)     <rng model='virtio'>
	I0721 23:25:34.050755   13262 main.go:141] libmachine: (addons-688294)       <backend model='random'>/dev/random</backend>
	I0721 23:25:34.050762   13262 main.go:141] libmachine: (addons-688294)     </rng>
	I0721 23:25:34.050778   13262 main.go:141] libmachine: (addons-688294)     
	I0721 23:25:34.050788   13262 main.go:141] libmachine: (addons-688294)     
	I0721 23:25:34.050797   13262 main.go:141] libmachine: (addons-688294)   </devices>
	I0721 23:25:34.050803   13262 main.go:141] libmachine: (addons-688294) </domain>
	I0721 23:25:34.050811   13262 main.go:141] libmachine: (addons-688294) 
	I0721 23:25:34.056668   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:55:e7:28 in network default
	I0721 23:25:34.057187   13262 main.go:141] libmachine: (addons-688294) Ensuring networks are active...
	I0721 23:25:34.057209   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:34.057844   13262 main.go:141] libmachine: (addons-688294) Ensuring network default is active
	I0721 23:25:34.058153   13262 main.go:141] libmachine: (addons-688294) Ensuring network mk-addons-688294 is active
	I0721 23:25:34.058898   13262 main.go:141] libmachine: (addons-688294) Getting domain xml...
	I0721 23:25:34.059566   13262 main.go:141] libmachine: (addons-688294) Creating domain...
	I0721 23:25:35.417351   13262 main.go:141] libmachine: (addons-688294) Waiting to get IP...
	I0721 23:25:35.418100   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:35.418461   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:35.418498   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:35.418451   13284 retry.go:31] will retry after 244.984124ms: waiting for machine to come up
	I0721 23:25:35.665004   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:35.665494   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:35.665537   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:35.665421   13284 retry.go:31] will retry after 350.812456ms: waiting for machine to come up
	I0721 23:25:36.017933   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:36.018350   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:36.018377   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:36.018294   13284 retry.go:31] will retry after 427.547876ms: waiting for machine to come up
	I0721 23:25:36.447874   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:36.448342   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:36.448377   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:36.448299   13284 retry.go:31] will retry after 508.437364ms: waiting for machine to come up
	I0721 23:25:36.957853   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:36.958168   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:36.958205   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:36.958127   13284 retry.go:31] will retry after 464.500826ms: waiting for machine to come up
	I0721 23:25:37.423770   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:37.424113   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:37.424136   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:37.424065   13284 retry.go:31] will retry after 754.05099ms: waiting for machine to come up
	I0721 23:25:38.181249   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:38.181690   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:38.181719   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:38.181638   13284 retry.go:31] will retry after 1.011173963s: waiting for machine to come up
	I0721 23:25:39.194108   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:39.194535   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:39.194569   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:39.194521   13284 retry.go:31] will retry after 1.205743617s: waiting for machine to come up
	I0721 23:25:40.401844   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:40.402201   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:40.402223   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:40.402151   13284 retry.go:31] will retry after 1.132035307s: waiting for machine to come up
	I0721 23:25:41.536536   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:41.536921   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:41.536947   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:41.536872   13284 retry.go:31] will retry after 2.169565885s: waiting for machine to come up
	I0721 23:25:43.708006   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:43.708394   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:43.708443   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:43.708364   13284 retry.go:31] will retry after 2.482734773s: waiting for machine to come up
	I0721 23:25:46.194027   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:46.194520   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:46.194544   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:46.194469   13284 retry.go:31] will retry after 2.973617951s: waiting for machine to come up
	I0721 23:25:49.170164   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:49.170530   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:49.170552   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:49.170498   13284 retry.go:31] will retry after 4.464588507s: waiting for machine to come up
	I0721 23:25:53.637069   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.637655   13262 main.go:141] libmachine: (addons-688294) Found IP for machine: 192.168.39.142
	I0721 23:25:53.637689   13262 main.go:141] libmachine: (addons-688294) Reserving static IP address...
	I0721 23:25:53.637703   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has current primary IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.637980   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find host DHCP lease matching {name: "addons-688294", mac: "52:54:00:58:13:11", ip: "192.168.39.142"} in network mk-addons-688294
	I0721 23:25:53.708516   13262 main.go:141] libmachine: (addons-688294) DBG | Getting to WaitForSSH function...
	I0721 23:25:53.708583   13262 main.go:141] libmachine: (addons-688294) Reserved static IP address: 192.168.39.142
	I0721 23:25:53.708600   13262 main.go:141] libmachine: (addons-688294) Waiting for SSH to be available...
	I0721 23:25:53.710621   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.710977   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:13:11}
	I0721 23:25:53.711003   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.711154   13262 main.go:141] libmachine: (addons-688294) DBG | Using SSH client type: external
	I0721 23:25:53.711182   13262 main.go:141] libmachine: (addons-688294) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa (-rw-------)
	I0721 23:25:53.711209   13262 main.go:141] libmachine: (addons-688294) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0721 23:25:53.711250   13262 main.go:141] libmachine: (addons-688294) DBG | About to run SSH command:
	I0721 23:25:53.711267   13262 main.go:141] libmachine: (addons-688294) DBG | exit 0
	I0721 23:25:53.846738   13262 main.go:141] libmachine: (addons-688294) DBG | SSH cmd err, output: <nil>: 
	I0721 23:25:53.846974   13262 main.go:141] libmachine: (addons-688294) KVM machine creation complete!
	I0721 23:25:53.847307   13262 main.go:141] libmachine: (addons-688294) Calling .GetConfigRaw
	I0721 23:25:53.847872   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:53.848116   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:53.848275   13262 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0721 23:25:53.848290   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:25:53.849611   13262 main.go:141] libmachine: Detecting operating system of created instance...
	I0721 23:25:53.849625   13262 main.go:141] libmachine: Waiting for SSH to be available...
	I0721 23:25:53.849631   13262 main.go:141] libmachine: Getting to WaitForSSH function...
	I0721 23:25:53.849637   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:53.852238   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.852617   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:53.852645   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.852800   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:53.852983   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:53.853118   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:53.853232   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:53.853388   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:53.853646   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:53.853662   13262 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0721 23:25:53.965659   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:25:53.965705   13262 main.go:141] libmachine: Detecting the provisioner...
	I0721 23:25:53.965718   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:53.968428   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.968848   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:53.968874   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.968963   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:53.969177   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:53.969365   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:53.969540   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:53.969696   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:53.969858   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:53.969867   13262 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0721 23:25:54.082831   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0721 23:25:54.082908   13262 main.go:141] libmachine: found compatible host: buildroot
	I0721 23:25:54.082917   13262 main.go:141] libmachine: Provisioning with buildroot...
	I0721 23:25:54.082924   13262 main.go:141] libmachine: (addons-688294) Calling .GetMachineName
	I0721 23:25:54.083145   13262 buildroot.go:166] provisioning hostname "addons-688294"
	I0721 23:25:54.083169   13262 main.go:141] libmachine: (addons-688294) Calling .GetMachineName
	I0721 23:25:54.083323   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.085689   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.086017   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.086041   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.086167   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.086356   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.086537   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.086705   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.086856   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:54.087057   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:54.087071   13262 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-688294 && echo "addons-688294" | sudo tee /etc/hostname
	I0721 23:25:54.211308   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-688294
	
	I0721 23:25:54.211337   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.213753   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.214079   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.214107   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.214254   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.214463   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.214644   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.214794   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.214966   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:54.215189   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:54.215208   13262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-688294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-688294/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-688294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:25:54.335254   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:25:54.335290   13262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:25:54.335325   13262 buildroot.go:174] setting up certificates
	I0721 23:25:54.335344   13262 provision.go:84] configureAuth start
	I0721 23:25:54.335360   13262 main.go:141] libmachine: (addons-688294) Calling .GetMachineName
	I0721 23:25:54.335660   13262 main.go:141] libmachine: (addons-688294) Calling .GetIP
	I0721 23:25:54.337920   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.338309   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.338348   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.338497   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.340292   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.340599   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.340632   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.340702   13262 provision.go:143] copyHostCerts
	I0721 23:25:54.340783   13262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:25:54.340937   13262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:25:54.341011   13262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:25:54.341072   13262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.addons-688294 san=[127.0.0.1 192.168.39.142 addons-688294 localhost minikube]
	I0721 23:25:54.546661   13262 provision.go:177] copyRemoteCerts
	I0721 23:25:54.546714   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:25:54.546735   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.549037   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.549383   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.549417   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.549629   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.549838   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.550001   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.550109   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:25:54.636210   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:25:54.658477   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0721 23:25:54.679920   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:25:54.701150   13262 provision.go:87] duration metric: took 365.790069ms to configureAuth
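The configureAuth step that just completed (provision.go:84) issues a CA-signed server certificate carrying the SANs from the provision.go:117 line above, then copies it into /etc/docker on the guest. minikube does this in Go rather than by shelling out, but a rough openssl sketch of the same issuance, using the cert paths from the log, would be:

# Issue a server cert signed by minikube's CA, embedding the SANs logged above.
CERTS=/home/jenkins/minikube-integration/19312-5094/.minikube/certs
openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
    -out server.csr -subj "/O=jenkins.addons-688294"
openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" \
    -CAcreateserial -days 365 -out server.pem \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.142,DNS:addons-688294,DNS:localhost,DNS:minikube')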
	I0721 23:25:54.701176   13262 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:25:54.701408   13262 config.go:182] Loaded profile config "addons-688294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:25:54.701506   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.703970   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.704305   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.704338   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.704448   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.704626   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.704787   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.704914   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.705077   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:54.705263   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:54.705286   13262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:25:54.964962   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
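The tee above drops the extra runtime flag into /etc/sysconfig/crio.minikube; presumably the crio unit on the Buildroot image loads that file through an EnvironmentFile= directive, which the restart then picks up. Two quick checks on the guest (assuming that unit layout on the minikube ISO):

# Does the crio unit (or a drop-in) reference the env file?
systemctl cat crio | grep -i environmentfile
# Did the flag reach the running daemon?
ps -o args= -C crio | tr ' ' '\n' | grep -A1 -- --insecure-registry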
	I0721 23:25:54.964985   13262 main.go:141] libmachine: Checking connection to Docker...
	I0721 23:25:54.964992   13262 main.go:141] libmachine: (addons-688294) Calling .GetURL
	I0721 23:25:54.966426   13262 main.go:141] libmachine: (addons-688294) DBG | Using libvirt version 6000000
	I0721 23:25:54.968741   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.969081   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.969109   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.969234   13262 main.go:141] libmachine: Docker is up and running!
	I0721 23:25:54.969247   13262 main.go:141] libmachine: Reticulating splines...
	I0721 23:25:54.969254   13262 client.go:171] duration metric: took 21.524065935s to LocalClient.Create
	I0721 23:25:54.969275   13262 start.go:167] duration metric: took 21.524142859s to libmachine.API.Create "addons-688294"
	I0721 23:25:54.969293   13262 start.go:293] postStartSetup for "addons-688294" (driver="kvm2")
	I0721 23:25:54.969305   13262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:25:54.969322   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:54.969547   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:25:54.969570   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.971881   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.972200   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.972218   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.972388   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.972554   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.972692   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.972797   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:25:55.060553   13262 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:25:55.064646   13262 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:25:55.064674   13262 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:25:55.064743   13262 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:25:55.064772   13262 start.go:296] duration metric: took 95.471001ms for postStartSetup
	I0721 23:25:55.064813   13262 main.go:141] libmachine: (addons-688294) Calling .GetConfigRaw
	I0721 23:25:55.107523   13262 main.go:141] libmachine: (addons-688294) Calling .GetIP
	I0721 23:25:55.110306   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.110661   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.110690   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.110928   13262 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/config.json ...
	I0721 23:25:55.169976   13262 start.go:128] duration metric: took 21.74242165s to createHost
	I0721 23:25:55.170015   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:55.173313   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.173672   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.173718   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.173870   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:55.174100   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:55.174275   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:55.174406   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:55.174634   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:55.174834   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:55.174846   13262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 23:25:55.287228   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721604355.262674363
	
	I0721 23:25:55.287250   13262 fix.go:216] guest clock: 1721604355.262674363
	I0721 23:25:55.287259   13262 fix.go:229] Guest: 2024-07-21 23:25:55.262674363 +0000 UTC Remote: 2024-07-21 23:25:55.16999872 +0000 UTC m=+21.837725633 (delta=92.675643ms)
	I0721 23:25:55.287283   13262 fix.go:200] guest clock delta is within tolerance: 92.675643ms
	I0721 23:25:55.287289   13262 start.go:83] releasing machines lock for "addons-688294", held for 21.859817716s
	I0721 23:25:55.287311   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:55.287564   13262 main.go:141] libmachine: (addons-688294) Calling .GetIP
	I0721 23:25:55.290090   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.290437   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.290462   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.290682   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:55.291117   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:55.291301   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:55.291383   13262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:25:55.291434   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:55.291537   13262 ssh_runner.go:195] Run: cat /version.json
	I0721 23:25:55.291562   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:55.294042   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.294300   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.294503   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.294529   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.294651   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:55.294813   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:55.294814   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.294884   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.294969   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:55.295019   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:55.295129   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:25:55.295207   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:55.295404   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:55.295566   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:25:55.375642   13262 ssh_runner.go:195] Run: systemctl --version
	I0721 23:25:55.423790   13262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:25:56.001801   13262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:25:56.007367   13262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:25:56.007434   13262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:25:56.023304   13262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 23:25:56.023338   13262 start.go:495] detecting cgroup driver to use...
	I0721 23:25:56.023397   13262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:25:56.039561   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:25:56.051900   13262 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:25:56.051946   13262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:25:56.064482   13262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:25:56.077385   13262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:25:56.187776   13262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:25:56.322445   13262 docker.go:233] disabling docker service ...
	I0721 23:25:56.322513   13262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:25:56.336618   13262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:25:56.348225   13262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:25:56.471141   13262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:25:56.599056   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0721 23:25:56.611867   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:25:56.628841   13262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:25:56.628905   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.638519   13262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:25:56.638581   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.648122   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.657478   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.666927   13262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:25:56.676578   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.685999   13262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.701302   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
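Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf in roughly the following state (reconstructed from the commands themselves, not read back from disk); the effective values can be spot-checked with `crio config`, which the tooling itself runs further down:

# Net effect of the edits on /etc/crio/crio.conf.d/02-crio.conf:
#   pause_image     = "registry.k8s.io/pause:3.9"
#   cgroup_manager  = "cgroupfs"
#   conmon_cgroup   = "pod"
#   default_sysctls = [
#     "net.ipv4.ip_unprivileged_port_start=0",
#   ]
sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup'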
	I0721 23:25:56.710924   13262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:25:56.719583   13262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0721 23:25:56.719637   13262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0721 23:25:56.730999   13262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
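The status-255 failure above is expected on a fresh guest: the net.bridge.* sysctls only exist once br_netfilter is loaded, which is why the very next command is modprobe (loading the module exposes, and by default enables, the bridge-nf-call sysctls). The fallback pattern as a standalone sketch:

# Make bridged pod traffic visible to iptables and turn on IPv4 forwarding.
if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
    sudo modprobe br_netfilter
fi
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'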
	I0721 23:25:56.739812   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:25:56.855701   13262 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0721 23:25:56.988967   13262 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:25:56.989057   13262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:25:56.993275   13262 start.go:563] Will wait 60s for crictl version
	I0721 23:25:56.993369   13262 ssh_runner.go:195] Run: which crictl
	I0721 23:25:56.996719   13262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:25:57.033935   13262 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0721 23:25:57.034057   13262 ssh_runner.go:195] Run: crio --version
	I0721 23:25:57.060539   13262 ssh_runner.go:195] Run: crio --version
	I0721 23:25:57.088942   13262 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:25:57.090390   13262 main.go:141] libmachine: (addons-688294) Calling .GetIP
	I0721 23:25:57.092913   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:57.093289   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:57.093315   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:57.093651   13262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:25:57.097474   13262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:25:57.109324   13262 kubeadm.go:883] updating cluster {Name:addons-688294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0721 23:25:57.109451   13262 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:25:57.109507   13262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:25:57.138158   13262 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0721 23:25:57.138238   13262 ssh_runner.go:195] Run: which lz4
	I0721 23:25:57.141792   13262 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0721 23:25:57.145491   13262 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0721 23:25:57.145519   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0721 23:25:58.252915   13262 crio.go:462] duration metric: took 1.111140121s to copy over tarball
	I0721 23:25:58.252991   13262 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0721 23:26:00.453629   13262 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200606224s)
	I0721 23:26:00.453665   13262 crio.go:469] duration metric: took 2.200720769s to extract the tarball
	I0721 23:26:00.453675   13262 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0721 23:26:00.495754   13262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:26:00.537230   13262 crio.go:514] all images are preloaded for cri-o runtime.
	I0721 23:26:00.537255   13262 cache_images.go:84] Images are preloaded, skipping loading
	I0721 23:26:00.537264   13262 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.30.3 crio true true} ...
	I0721 23:26:00.537391   13262 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-688294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
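Reassembled from the fragments above, the kubelet drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (313 bytes) is approximately the following (a reconstruction from the logged unit body, not the file on disk):

sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-688294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142

[Install]
EOF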
	I0721 23:26:00.537473   13262 ssh_runner.go:195] Run: crio config
	I0721 23:26:00.578905   13262 cni.go:84] Creating CNI manager for ""
	I0721 23:26:00.578923   13262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:26:00.578932   13262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 23:26:00.578957   13262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-688294 NodeName:addons-688294 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 23:26:00.579143   13262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-688294"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
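At this point the manifest above exists only in memory; it is written out to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before init runs; whether this exact subcommand is available in the pinned v1.30.3 binary is an assumption:

# Validate the rendered config with the same kubeadm binary that will consume it.
sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new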
	I0721 23:26:00.579208   13262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:26:00.588452   13262 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 23:26:00.588517   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0721 23:26:00.597034   13262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0721 23:26:00.611832   13262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:26:00.626325   13262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0721 23:26:00.643052   13262 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0721 23:26:00.646647   13262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:26:00.657742   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:26:00.764511   13262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:26:00.779994   13262 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294 for IP: 192.168.39.142
	I0721 23:26:00.780010   13262 certs.go:194] generating shared ca certs ...
	I0721 23:26:00.780024   13262 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:00.780160   13262 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:26:00.916144   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt ...
	I0721 23:26:00.916179   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt: {Name:mk13f89e22caf5001d08863d12b0cbb363da5b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:00.916375   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key ...
	I0721 23:26:00.916391   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key: {Name:mkd5a701b56963d453c76ebba0190d75523b6b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:00.916506   13262 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:26:01.040049   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt ...
	I0721 23:26:01.040078   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt: {Name:mk56b5fbecd9bed1d6a729844440840ef853de54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.040262   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key ...
	I0721 23:26:01.040276   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key: {Name:mkb1fc6e8f2aa4018dca66106de7aad53ea9ca5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.040387   13262 certs.go:256] generating profile certs ...
	I0721 23:26:01.040444   13262 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.key
	I0721 23:26:01.040459   13262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt with IP's: []
	I0721 23:26:01.143847   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt ...
	I0721 23:26:01.143881   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: {Name:mk502a02dd0545f610ec2430272e7dc34e6c9e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.144223   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.key ...
	I0721 23:26:01.144248   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.key: {Name:mkda774d18c002fe67c556b5bb5c0ea8990bdd85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.144396   13262 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key.46adf13e
	I0721 23:26:01.144416   13262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt.46adf13e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.142]
	I0721 23:26:01.262423   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt.46adf13e ...
	I0721 23:26:01.262453   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt.46adf13e: {Name:mkd0ddade9e48636d5652f3537abe938ddee8ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.262637   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key.46adf13e ...
	I0721 23:26:01.262652   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key.46adf13e: {Name:mk14da2e09af673932b7e7c0725f59d34b59d820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.262750   13262 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt.46adf13e -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt
	I0721 23:26:01.262823   13262 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key.46adf13e -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key
	I0721 23:26:01.262869   13262 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.key
	I0721 23:26:01.262884   13262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.crt with IP's: []
	I0721 23:26:01.370707   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.crt ...
	I0721 23:26:01.370737   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.crt: {Name:mkc7806d29165ead30a6309d111a88af9f1dabdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.370912   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.key ...
	I0721 23:26:01.370925   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.key: {Name:mk893899356780b66e17d05b51227314e0191484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.371110   13262 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:26:01.371144   13262 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:26:01.371167   13262 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:26:01.371192   13262 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:26:01.371838   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:26:01.394875   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:26:01.416946   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:26:01.440095   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:26:01.481066   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0721 23:26:01.505081   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:26:01.526585   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:26:01.547703   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:26:01.568733   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:26:01.589751   13262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 23:26:01.604624   13262 ssh_runner.go:195] Run: openssl version
	I0721 23:26:01.609736   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:26:01.619191   13262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:26:01.623065   13262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:26:01.623109   13262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:26:01.628242   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
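The b5213941.0 link name above is not arbitrary: OpenSSL locates trust anchors in /etc/ssl/certs by the certificate's subject hash, which is exactly what the `openssl x509 -hash -noout` call two steps earlier computed. The link can be rebuilt by hand the same way:

# Derive the trust-store link name from the CA's subject hash.
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"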
	I0721 23:26:01.637878   13262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:26:01.641473   13262 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:26:01.641520   13262 kubeadm.go:392] StartCluster: {Name:addons-688294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:26:01.641610   13262 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0721 23:26:01.641663   13262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0721 23:26:01.679671   13262 cri.go:89] found id: ""
	I0721 23:26:01.679738   13262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0721 23:26:01.688518   13262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 23:26:01.696817   13262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 23:26:01.705137   13262 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 23:26:01.705155   13262 kubeadm.go:157] found existing configuration files:
	
	I0721 23:26:01.705202   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0721 23:26:01.713122   13262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 23:26:01.713168   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 23:26:01.721297   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0721 23:26:01.729198   13262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 23:26:01.729245   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 23:26:01.737455   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0721 23:26:01.745372   13262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 23:26:01.745430   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 23:26:01.753636   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0721 23:26:01.761447   13262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 23:26:01.761494   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0721 23:26:01.769504   13262 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 23:26:01.933098   13262 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 23:26:12.713285   13262 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0721 23:26:12.713351   13262 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 23:26:12.713428   13262 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 23:26:12.713514   13262 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 23:26:12.713656   13262 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
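As the preflight hint above suggests, the image pull can be done ahead of time against the same generated config; a sketch using the kubeadm binary this run stages under /var/lib/minikube/binaries (paths taken from the log lines above, not verified separately):

sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml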
	I0721 23:26:12.713739   13262 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 23:26:12.715666   13262 out.go:204]   - Generating certificates and keys ...
	I0721 23:26:12.715743   13262 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 23:26:12.715812   13262 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 23:26:12.715915   13262 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0721 23:26:12.716007   13262 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0721 23:26:12.716098   13262 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0721 23:26:12.716171   13262 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0721 23:26:12.716257   13262 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0721 23:26:12.716433   13262 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-688294 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0721 23:26:12.716518   13262 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0721 23:26:12.716634   13262 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-688294 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0721 23:26:12.716690   13262 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0721 23:26:12.716749   13262 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0721 23:26:12.716788   13262 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0721 23:26:12.716837   13262 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 23:26:12.716900   13262 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 23:26:12.716978   13262 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0721 23:26:12.717036   13262 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 23:26:12.717104   13262 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 23:26:12.717158   13262 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 23:26:12.717253   13262 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 23:26:12.717353   13262 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 23:26:12.718797   13262 out.go:204]   - Booting up control plane ...
	I0721 23:26:12.718891   13262 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 23:26:12.719010   13262 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 23:26:12.719103   13262 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 23:26:12.719235   13262 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 23:26:12.719311   13262 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 23:26:12.719358   13262 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 23:26:12.719525   13262 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0721 23:26:12.719618   13262 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0721 23:26:12.719702   13262 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00119796s
	I0721 23:26:12.719794   13262 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0721 23:26:12.719877   13262 kubeadm.go:310] [api-check] The API server is healthy after 5.001951151s
	I0721 23:26:12.720002   13262 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 23:26:12.720122   13262 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 23:26:12.720189   13262 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 23:26:12.720362   13262 kubeadm.go:310] [mark-control-plane] Marking the node addons-688294 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 23:26:12.720440   13262 kubeadm.go:310] [bootstrap-token] Using token: b18roa.jlvyrt5y4dz1vq43
	I0721 23:26:12.722528   13262 out.go:204]   - Configuring RBAC rules ...
	I0721 23:26:12.722675   13262 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 23:26:12.722786   13262 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 23:26:12.722932   13262 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 23:26:12.723075   13262 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0721 23:26:12.723206   13262 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 23:26:12.723312   13262 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 23:26:12.723456   13262 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 23:26:12.723519   13262 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 23:26:12.723589   13262 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 23:26:12.723604   13262 kubeadm.go:310] 
	I0721 23:26:12.723662   13262 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 23:26:12.723668   13262 kubeadm.go:310] 
	I0721 23:26:12.723758   13262 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 23:26:12.723771   13262 kubeadm.go:310] 
	I0721 23:26:12.723811   13262 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 23:26:12.723874   13262 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 23:26:12.723946   13262 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 23:26:12.723956   13262 kubeadm.go:310] 
	I0721 23:26:12.724023   13262 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 23:26:12.724030   13262 kubeadm.go:310] 
	I0721 23:26:12.724077   13262 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 23:26:12.724091   13262 kubeadm.go:310] 
	I0721 23:26:12.724166   13262 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 23:26:12.724264   13262 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 23:26:12.724354   13262 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 23:26:12.724363   13262 kubeadm.go:310] 
	I0721 23:26:12.724467   13262 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 23:26:12.724566   13262 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 23:26:12.724574   13262 kubeadm.go:310] 
	I0721 23:26:12.724684   13262 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b18roa.jlvyrt5y4dz1vq43 \
	I0721 23:26:12.724801   13262 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0721 23:26:12.724832   13262 kubeadm.go:310] 	--control-plane 
	I0721 23:26:12.724841   13262 kubeadm.go:310] 
	I0721 23:26:12.724951   13262 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 23:26:12.724960   13262 kubeadm.go:310] 
	I0721 23:26:12.725054   13262 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b18roa.jlvyrt5y4dz1vq43 \
	I0721 23:26:12.725165   13262 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
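The bootstrap token printed above carries kubeadm's default 24h TTL; once it expires, a fresh join command can be minted on the control plane and the discovery hash recomputed from the CA certificate (the standard kubeadm/openssl recipe; the ca.crt filename under the certs dir shown earlier is an assumption):

kubeadm token create --print-join-command
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | sha256sum | cut -d' ' -f1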
	I0721 23:26:12.725175   13262 cni.go:84] Creating CNI manager for ""
	I0721 23:26:12.725181   13262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:26:12.727267   13262 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0721 23:26:12.728441   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0721 23:26:12.738324   13262 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
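The 496-byte conflist copied above is minikube's bridge CNI config. The log does not print its contents, but a minimal bridge-plus-portmap conflist of the kind the bridge CNI plugin accepts looks roughly like this (illustrative sketch only; the pod subnet is an assumption, not taken from this run):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF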
	I0721 23:26:12.756061   13262 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 23:26:12.756137   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-688294 minikube.k8s.io/updated_at=2024_07_21T23_26_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=addons-688294 minikube.k8s.io/primary=true
	I0721 23:26:12.756140   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
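This minikube-rbac clusterrolebinding is what lets the addon manifests installed later in this run operate under kube-system's default service account; it can be inspected afterwards with the same bundled kubectl and node-local kubeconfig:

sudo /var/lib/minikube/binaries/v1.30.3/kubectl get clusterrolebinding minikube-rbac --kubeconfig=/var/lib/minikube/kubeconfig -o wide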
	I0721 23:26:12.773146   13262 ops.go:34] apiserver oom_adj: -16
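For scale: the legacy /proc/<pid>/oom_adj knob runs from -17 (never kill) to +15 (kill first), so -16 keeps the apiserver near the bottom of the OOM killer's target list without exempting it outright; the value can be read back exactly as the Run line above does:

cat /proc/$(pgrep kube-apiserver)/oom_adj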
	I0721 23:26:12.880510   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:13.380756   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:13.881396   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:14.380973   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:14.880589   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:15.381294   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:15.880665   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:16.381287   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:16.881293   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:17.380571   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:17.881459   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:18.381067   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:18.881210   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:19.381160   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:19.881367   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:20.381228   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:20.880973   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:21.381263   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:21.880583   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:22.380931   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:22.881129   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:23.381226   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:23.880844   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:24.380559   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:24.881447   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:25.381473   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:25.880636   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:26.380612   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:26.471530   13262 kubeadm.go:1113] duration metric: took 13.715441733s to wait for elevateKubeSystemPrivileges
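The burst of "kubectl get sa default" calls above is a readiness poll: judging by the timestamps, minikube retries every 500ms until the default ServiceAccount in kube-system exists. The shell equivalent of that loop (a sketch):

until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done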
	I0721 23:26:26.471557   13262 kubeadm.go:394] duration metric: took 24.8300396s to StartCluster
	I0721 23:26:26.471576   13262 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:26.471703   13262 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:26:26.472110   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:26.472298   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0721 23:26:26.472345   13262 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:26:26.472389   13262 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
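Every "true" entry in the toEnable map above is an addon this profile will install; the same set can be driven one addon at a time with the standard CLI:

minikube addons enable metrics-server -p addons-688294
minikube addons list -p addons-688294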
	I0721 23:26:26.472474   13262 addons.go:69] Setting yakd=true in profile "addons-688294"
	I0721 23:26:26.472501   13262 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-688294"
	I0721 23:26:26.472514   13262 addons.go:69] Setting helm-tiller=true in profile "addons-688294"
	I0721 23:26:26.472531   13262 addons.go:234] Setting addon yakd=true in "addons-688294"
	I0721 23:26:26.472535   13262 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-688294"
	I0721 23:26:26.472538   13262 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-688294"
	I0721 23:26:26.472547   13262 config.go:182] Loaded profile config "addons-688294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:26:26.472553   13262 addons.go:234] Setting addon helm-tiller=true in "addons-688294"
	I0721 23:26:26.472563   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472570   13262 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-688294"
	I0721 23:26:26.472597   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472597   13262 addons.go:69] Setting volcano=true in profile "addons-688294"
	I0721 23:26:26.472604   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472618   13262 addons.go:234] Setting addon volcano=true in "addons-688294"
	I0721 23:26:26.472638   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472683   13262 addons.go:69] Setting storage-provisioner=true in profile "addons-688294"
	I0721 23:26:26.472496   13262 addons.go:69] Setting cloud-spanner=true in profile "addons-688294"
	I0721 23:26:26.472703   13262 addons.go:234] Setting addon storage-provisioner=true in "addons-688294"
	I0721 23:26:26.472718   13262 addons.go:234] Setting addon cloud-spanner=true in "addons-688294"
	I0721 23:26:26.472723   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472737   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.473021   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473037   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473050   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473052   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473064   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473064   13262 addons.go:69] Setting volumesnapshots=true in profile "addons-688294"
	I0721 23:26:26.473071   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473076   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473084   13262 addons.go:234] Setting addon volumesnapshots=true in "addons-688294"
	I0721 23:26:26.473103   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.473103   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.472510   13262 addons.go:69] Setting gcp-auth=true in profile "addons-688294"
	I0721 23:26:26.472506   13262 addons.go:69] Setting default-storageclass=true in profile "addons-688294"
	I0721 23:26:26.473139   13262 mustload.go:65] Loading cluster: addons-688294
	I0721 23:26:26.473145   13262 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-688294"
	I0721 23:26:26.473287   13262 config.go:182] Loaded profile config "addons-688294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:26:26.472493   13262 addons.go:69] Setting registry=true in profile "addons-688294"
	I0721 23:26:26.473369   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473383   13262 addons.go:234] Setting addon registry=true in "addons-688294"
	I0721 23:26:26.472501   13262 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-688294"
	I0721 23:26:26.473393   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473411   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.473420   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473439   13262 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-688294"
	I0721 23:26:26.473385   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473123   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.472528   13262 addons.go:69] Setting metrics-server=true in profile "addons-688294"
	I0721 23:26:26.473549   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473552   13262 addons.go:234] Setting addon metrics-server=true in "addons-688294"
	I0721 23:26:26.473564   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473609   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473626   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.472518   13262 addons.go:69] Setting inspektor-gadget=true in profile "addons-688294"
	I0721 23:26:26.473691   13262 addons.go:234] Setting addon inspektor-gadget=true in "addons-688294"
	I0721 23:26:26.472481   13262 addons.go:69] Setting ingress=true in profile "addons-688294"
	I0721 23:26:26.473736   13262 addons.go:234] Setting addon ingress=true in "addons-688294"
	I0721 23:26:26.473054   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473756   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473782   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.472513   13262 addons.go:69] Setting ingress-dns=true in profile "addons-688294"
	I0721 23:26:26.473875   13262 addons.go:234] Setting addon ingress-dns=true in "addons-688294"
	I0721 23:26:26.473915   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.473929   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473953   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473916   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.474050   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.474098   13262 out.go:177] * Verifying Kubernetes components...
	I0721 23:26:26.474011   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.474463   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.474494   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.474531   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.474649   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.474705   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.474720   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.474759   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.474850   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.474871   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.475532   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:26:26.493934   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I0721 23:26:26.494159   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0721 23:26:26.495040   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.495202   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.495265   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46375
	I0721 23:26:26.495353   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
	I0721 23:26:26.499182   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.499237   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.505458   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.505485   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.505669   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0721 23:26:26.505942   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.505962   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.506215   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.506301   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.506801   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.506820   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.506844   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.507307   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.507332   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.507368   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.507407   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.507669   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.507760   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.508336   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.508339   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.508373   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.508805   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.508822   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.509071   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.509091   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.509442   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.509653   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.510356   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43383
	I0721 23:26:26.512066   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.512651   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.512675   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.514725   13262 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-688294"
	I0721 23:26:26.514766   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.515124   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.515142   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.519005   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.520042   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.520067   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.520423   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.520586   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.522249   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.522654   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.522674   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.534633   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0721 23:26:26.534808   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I0721 23:26:26.535191   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.535689   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.535712   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.536040   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.536205   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.539001   13262 addons.go:234] Setting addon default-storageclass=true in "addons-688294"
	I0721 23:26:26.539043   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.539404   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.539440   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.539654   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0721 23:26:26.539797   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.542489   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34725
	I0721 23:26:26.542508   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0721 23:26:26.542498   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41573
	I0721 23:26:26.542643   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0721 23:26:26.542657   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.542678   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.543106   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.543180   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.543199   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.543759   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.543800   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.544004   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.544103   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.544116   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.544462   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.544476   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.544536   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.545103   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.545144   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.545353   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.545432   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.545446   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.545811   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.545831   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.545880   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.545893   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.546333   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.546368   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.546652   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.546679   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.546752   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.546785   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.547347   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.547381   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.547911   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.547935   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.548406   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.549014   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.549047   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.555366   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I0721 23:26:26.555887   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.556919   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.556937   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.557253   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.557829   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.557870   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.558059   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0721 23:26:26.558523   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.559237   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.559256   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.559523   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43653
	I0721 23:26:26.559852   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.560257   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.560275   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.560106   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.560617   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.560988   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.561023   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.561345   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0721 23:26:26.562003   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.562038   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.564908   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0721 23:26:26.565361   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.565854   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.565871   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.566143   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.566577   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.566625   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.566843   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0721 23:26:26.566944   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0721 23:26:26.567283   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.567355   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.567863   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.567884   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.568020   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.568031   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.568207   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.568424   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.568443   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.569020   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.570535   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.571061   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.572598   13262 out.go:177]   - Using image docker.io/registry:2.8.3
	I0721 23:26:26.572699   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0721 23:26:26.572737   13262 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0721 23:26:26.573097   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.573603   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.573621   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.573831   13262 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0721 23:26:26.573846   13262 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0721 23:26:26.573863   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
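This is the pattern for every addon in the rest of this section: the manifest is scp'd to /etc/kubernetes/addons/ over the SSH session being set up here, then applied with the bundled kubectl against the node-local kubeconfig. A sketch of the apply step, inferred from the staging above rather than quoted from this log:

sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /etc/kubernetes/addons/metrics-apiservice.yaml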
	I0721 23:26:26.573915   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.574088   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.574853   13262 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0721 23:26:26.576549   13262 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0721 23:26:26.576566   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0721 23:26:26.576584   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.577155   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40751
	I0721 23:26:26.577392   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.578074   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.578708   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.579483   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.579501   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.579562   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.579906   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.580323   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.580334   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.580349   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.580361   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.580496   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.580637   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.580765   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.581178   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.581239   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.581456   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.581619   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.581782   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.581910   13262 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0721 23:26:26.582020   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.582523   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.583141   13262 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0721 23:26:26.583156   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0721 23:26:26.583171   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.584748   13262 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0721 23:26:26.586331   13262 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0721 23:26:26.586351   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0721 23:26:26.586370   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.586547   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.587285   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.587304   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.587454   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.587624   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.587849   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.588462   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.589639   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.590228   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.590228   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.590268   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.590378   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.590505   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.590816   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.595496   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.596129   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.596148   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.596577   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.596801   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.598568   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I0721 23:26:26.598979   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.599494   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.599541   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.599941   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.600372   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.600429   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0721 23:26:26.600563   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37177
	I0721 23:26:26.601311   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
	I0721 23:26:26.601724   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.602133   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0721 23:26:26.602663   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0721 23:26:26.602778   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.603049   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.603105   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.603125   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.603184   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.603727   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.603743   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.603817   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.603940   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.603951   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.604002   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.604041   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.604267   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.604290   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.604387   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.604404   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.604267   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.604436   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.604836   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.604871   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.605685   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.605850   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.606045   13262 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0721 23:26:26.606271   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.606393   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.606415   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.606992   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.606798   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.607465   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.607705   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:26.607719   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:26.607908   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:26.607918   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:26.607926   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:26.607934   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:26.608187   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:26.608216   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:26.608224   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	W0721 23:26:26.608303   13262 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
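That warning is the expected outcome on this job: the volcano enable callback rejects the crio runtime outright, so the addon is skipped cleanly and the remaining addons continue; "minikube addons list -p addons-688294" would report it as disabled.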
	I0721 23:26:26.608426   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0721 23:26:26.608433   13262 out.go:177]   - Using image docker.io/busybox:stable
	I0721 23:26:26.608480   13262 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0721 23:26:26.608495   13262 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0721 23:26:26.608761   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.609691   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.609705   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.609779   13262 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0721 23:26:26.609797   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0721 23:26:26.609814   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.609956   13262 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0721 23:26:26.609971   13262 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0721 23:26:26.609987   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.610040   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.610209   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.610349   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0721 23:26:26.610359   13262 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0721 23:26:26.610373   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.610637   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0721 23:26:26.610949   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.611410   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.611425   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.611635   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.612918   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.614129   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.614862   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.615278   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.615308   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.615279   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0721 23:26:26.615546   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.615806   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.615847   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.616048   13262 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 23:26:26.616215   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.616727   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.616746   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.616917   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0721 23:26:26.616936   13262 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0721 23:26:26.616955   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.617005   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.617701   13262 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:26:26.617718   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 23:26:26.617733   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.617733   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.617708   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0721 23:26:26.617797   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.618022   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.618043   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.617936   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.618072   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.618095   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.618240   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.618486   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.618620   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.618636   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.618791   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.618836   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.618966   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.619079   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.619134   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.618759   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.620077   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.620542   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.620803   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.621559   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.621700   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.621742   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.621930   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.622108   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.622163   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.622305   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.622665   13262 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0721 23:26:26.623174   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.623629   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.623646   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.623840   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.624056   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.624215   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.624350   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.624406   13262 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0721 23:26:26.624669   13262 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0721 23:26:26.624680   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0721 23:26:26.624691   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.627086   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.627130   13262 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:26:26.627551   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.627579   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.627774   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.627931   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.628130   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.628268   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.629330   13262 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:26:26.630447   13262 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0721 23:26:26.630463   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0721 23:26:26.630477   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.631495   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0721 23:26:26.631891   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.631950   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35693
	I0721 23:26:26.632414   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.632526   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.632548   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.632849   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.632872   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.632877   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.633050   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.633243   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.633381   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.633432   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.634002   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.634023   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.634259   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.634404   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.634497   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0721 23:26:26.634646   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.634710   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.634820   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.634928   13262 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 23:26:26.634951   13262 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 23:26:26.634966   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.634930   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.635342   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.635474   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.635490   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.636043   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.636245   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.636895   13262 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0721 23:26:26.637919   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.638109   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.638124   13262 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0721 23:26:26.638137   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0721 23:26:26.638152   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.638458   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.638482   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.638659   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.638787   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.638924   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.639020   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.639331   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0721 23:26:26.640879   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0721 23:26:26.640958   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.641327   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.641365   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.641501   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.641659   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.641803   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.641944   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.643069   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0721 23:26:26.644430   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0721 23:26:26.645587   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0721 23:26:26.646629   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0721 23:26:26.647660   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0721 23:26:26.648640   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0721 23:26:26.649677   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0721 23:26:26.649697   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0721 23:26:26.649729   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.652011   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.652335   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.652378   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.652500   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.652682   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.652838   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.652966   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
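The repeating GetSSHHostname -> GetSSHPort -> GetSSHKeyPath -> GetSSHUsername -> "new ssh client" sequence above is the kvm2 driver resolving connection details before each manifest copy. A minimal sketch of that handshake using golang.org/x/crypto/ssh; this is illustrative only, not minikube's actual sshutil, and newSSHClient is a hypothetical helper:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient mirrors the port/key/user lookup seen in the log above.
    func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath) // e.g. .../machines/addons-688294/id_rsa
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User: user, // "docker" in the log above
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Acceptable for a throwaway test VM; real code should verify host keys.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }

    func main() {
        client, err := newSSHClient("192.168.39.142", 22, "/path/to/id_rsa", "docker")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
    }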
	I0721 23:26:26.941798   13262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:26:26.941898   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0721 23:26:27.062799   13262 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0721 23:26:27.062822   13262 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0721 23:26:27.076780   13262 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0721 23:26:27.076800   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0721 23:26:27.117270   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0721 23:26:27.117290   13262 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0721 23:26:27.120449   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0721 23:26:27.121771   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 23:26:27.130362   13262 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0721 23:26:27.130383   13262 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0721 23:26:27.152349   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0721 23:26:27.157160   13262 node_ready.go:35] waiting up to 6m0s for node "addons-688294" to be "Ready" ...
	I0721 23:26:27.159738   13262 node_ready.go:49] node "addons-688294" has status "Ready":"True"
	I0721 23:26:27.159755   13262 node_ready.go:38] duration metric: took 2.571307ms for node "addons-688294" to be "Ready" ...
	I0721 23:26:27.159763   13262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:26:27.165456   13262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:27.171925   13262 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0721 23:26:27.171940   13262 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0721 23:26:27.178595   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0721 23:26:27.222825   13262 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0721 23:26:27.222854   13262 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0721 23:26:27.223252   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0721 23:26:27.229565   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0721 23:26:27.229581   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0721 23:26:27.267189   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0721 23:26:27.314262   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:26:27.332947   13262 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0721 23:26:27.332968   13262 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0721 23:26:27.339098   13262 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0721 23:26:27.339115   13262 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0721 23:26:27.339495   13262 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0721 23:26:27.339508   13262 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0721 23:26:27.350012   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0721 23:26:27.350029   13262 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0721 23:26:27.356834   13262 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0721 23:26:27.356848   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0721 23:26:27.415202   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0721 23:26:27.415228   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0721 23:26:27.429474   13262 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0721 23:26:27.429496   13262 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0721 23:26:27.510530   13262 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0721 23:26:27.510566   13262 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0721 23:26:27.520849   13262 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0721 23:26:27.520868   13262 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0721 23:26:27.552669   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0721 23:26:27.552689   13262 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0721 23:26:27.555152   13262 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0721 23:26:27.555181   13262 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0721 23:26:27.568265   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0721 23:26:27.619240   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0721 23:26:27.619266   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0721 23:26:27.652926   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0721 23:26:27.660185   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0721 23:26:27.719997   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0721 23:26:27.720028   13262 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0721 23:26:27.745204   13262 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0721 23:26:27.745228   13262 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0721 23:26:27.804249   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0721 23:26:27.804271   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0721 23:26:27.819632   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0721 23:26:27.819650   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0721 23:26:27.885452   13262 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:26:27.885478   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0721 23:26:27.920073   13262 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0721 23:26:27.920098   13262 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0721 23:26:27.995955   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0721 23:26:27.995978   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0721 23:26:28.133491   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0721 23:26:28.166251   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:26:28.250449   13262 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0721 23:26:28.250476   13262 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0721 23:26:28.279181   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0721 23:26:28.279201   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0721 23:26:28.510129   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0721 23:26:28.510152   13262 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0721 23:26:28.554887   13262 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0721 23:26:28.554906   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0721 23:26:28.755137   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0721 23:26:28.755172   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0721 23:26:28.794772   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0721 23:26:28.886681   13262 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.944747392s)
	I0721 23:26:28.886710   13262 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
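The "host record injected" message confirms the sed pipeline started at 23:26:26.941898 and completed at 23:26:28.886681: it rewrites the coredns ConfigMap so pods can resolve the host. Reconstructed from the two sed inserts in that command (a sketch, not a dump of the live ConfigMap), the resulting Corefile fragment looks like:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The first sed expression places the hosts block immediately before the forward-to-resolv.conf line, so the host.minikube.internal name is answered locally and everything else falls through to the node's resolver; the second adds the log directive ahead of errors.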
	I0721 23:26:28.989002   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0721 23:26:28.989024   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0721 23:26:29.177747   13262 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace has status "Ready":"False"
	I0721 23:26:29.241796   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0721 23:26:29.241824   13262 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0721 23:26:29.320569   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.198768294s)
	I0721 23:26:29.320627   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.320637   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.320705   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.200216529s)
	I0721 23:26:29.320748   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.320764   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.320917   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:29.320961   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.320969   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.320985   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.320991   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.321007   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:29.321042   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.321050   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.321063   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.321071   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.321268   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.321281   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.322735   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:29.322753   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.322790   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.348919   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.348941   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.349217   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.349242   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.349249   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:29.390162   13262 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-688294" context rescaled to 1 replicas
	I0721 23:26:29.416536   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0721 23:26:31.193415   13262 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace has status "Ready":"False"
	I0721 23:26:31.424055   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.271673317s)
	I0721 23:26:31.424072   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.245414703s)
	I0721 23:26:31.424105   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424105   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424116   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424118   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424119   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.200836462s)
	I0721 23:26:31.424339   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424364   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424499   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.424542   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.424552   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.424550   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.424597   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.424599   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424624   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.424636   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424648   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424655   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424606   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.424839   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.424873   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.424890   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.425032   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.425052   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.425059   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.426653   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.426667   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.426676   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.426683   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.426897   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.426927   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.426936   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.505375   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.505399   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.505665   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.505686   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.505718   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.699599   13262 pod_ready.go:92] pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.699624   13262 pod_ready.go:81] duration metric: took 4.534145821s for pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.699637   13262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wxvm9" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.723008   13262 pod_ready.go:92] pod "coredns-7db6d8ff4d-wxvm9" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.723027   13262 pod_ready.go:81] duration metric: took 23.384884ms for pod "coredns-7db6d8ff4d-wxvm9" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.723037   13262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.751710   13262 pod_ready.go:92] pod "etcd-addons-688294" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.751730   13262 pod_ready.go:81] duration metric: took 28.687782ms for pod "etcd-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.751739   13262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.813970   13262 pod_ready.go:92] pod "kube-apiserver-addons-688294" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.813989   13262 pod_ready.go:81] duration metric: took 62.243947ms for pod "kube-apiserver-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.813998   13262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.913601   13262 pod_ready.go:92] pod "kube-controller-manager-addons-688294" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.913629   13262 pod_ready.go:81] duration metric: took 99.623509ms for pod "kube-controller-manager-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.913643   13262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jcqpx" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:32.068960   13262 pod_ready.go:92] pod "kube-proxy-jcqpx" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:32.068982   13262 pod_ready.go:81] duration metric: took 155.331037ms for pod "kube-proxy-jcqpx" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:32.068991   13262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:32.471239   13262 pod_ready.go:92] pod "kube-scheduler-addons-688294" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:32.471263   13262 pod_ready.go:81] duration metric: took 402.264753ms for pod "kube-scheduler-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:32.471273   13262 pod_ready.go:38] duration metric: took 5.311498447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
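The pod_ready waits above poll each system-critical pod until its PodReady condition reports True. A minimal client-go sketch of that check; waitPodReady is a hypothetical helper written for illustration, not minikube's pod_ready.go, and it assumes cs is an already-configured clientset:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }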
	I0721 23:26:32.471291   13262 api_server.go:52] waiting for apiserver process to appear ...
	I0721 23:26:32.471358   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:26:33.682884   13262 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0721 23:26:33.682926   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:33.686394   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:33.686864   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:33.686907   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:33.687125   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:33.687330   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:33.687488   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:33.687651   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:33.919910   13262 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0721 23:26:33.963095   13262 addons.go:234] Setting addon gcp-auth=true in "addons-688294"
	I0721 23:26:33.963142   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:33.963423   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:33.963451   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:33.979127   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0721 23:26:33.979688   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:33.980128   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:33.980152   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:33.980560   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:33.981042   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:33.981081   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:33.995402   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0721 23:26:33.995879   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:33.996416   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:33.996438   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:33.996762   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:33.996953   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:33.998380   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:33.998638   13262 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0721 23:26:33.998669   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:34.001509   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:34.001944   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:34.001972   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:34.002112   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:34.002286   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:34.002464   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:34.002594   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:34.457904   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.190672777s)
	I0721 23:26:34.457959   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.457960   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.143666176s)
	I0721 23:26:34.457973   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.457994   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458008   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.457994   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.88969924s)
	I0721 23:26:34.458054   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458064   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.805102613s)
	I0721 23:26:34.458071   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458131   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.797906302s)
	I0721 23:26:34.458150   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458092   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458162   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458185   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458247   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.324726481s)
	I0721 23:26:34.458264   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458273   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458347   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.292060025s)
	W0721 23:26:34.458393   13262 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0721 23:26:34.458421   13262 retry.go:31] will retry after 287.426306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
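The failure above is the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, so the API server has no REST mapping for the kind yet and apply exits 1, even though the stdout shows the CRDs themselves were created. The retry.go line handles this by backing off and reapplying once the CRDs are registered. A sketch of that backoff-and-retry shape; retryApply and applyFn are illustrative names, not minikube's retry.go API, and "time" is assumed imported:

    // Retry an apply-style operation with growing backoff, as the
    // "will retry after 287.426306ms" line above does.
    func retryApply(applyFn func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = applyFn(); err == nil {
                return nil
            }
            time.Sleep(base << i) // roughly exponential backoff between re-applies
        }
        return err
    }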
	I0721 23:26:34.458513   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.663707627s)
	I0721 23:26:34.458534   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458543   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458758   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.458780   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.458793   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.458801   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458807   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458808   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.458820   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.458828   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458835   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458785   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.458881   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.458887   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.458894   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458901   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458935   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.458951   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.458957   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.458964   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458971   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.459004   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.459019   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.459028   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.459035   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.459042   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.459271   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.459303   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.459313   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.459321   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.459330   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.459390   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.459419   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.459424   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.459430   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.459436   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.461076   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.461108   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461114   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.461118   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461136   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461142   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461230   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461248   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461258   13262 addons.go:475] Verifying addon metrics-server=true in "addons-688294"
	I0721 23:26:34.461264   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.461312   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461319   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461326   13262 addons.go:475] Verifying addon ingress=true in "addons-688294"
	I0721 23:26:34.461647   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.461686   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461695   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461741   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461752   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461248   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461975   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461984   13262 addons.go:475] Verifying addon registry=true in "addons-688294"
	I0721 23:26:34.463163   13262 out.go:177] * Verifying ingress addon...
	I0721 23:26:34.463807   13262 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-688294 service yakd-dashboard -n yakd-dashboard
	
	I0721 23:26:34.463821   13262 out.go:177] * Verifying registry addon...
	I0721 23:26:34.465267   13262 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0721 23:26:34.466010   13262 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0721 23:26:34.486007   13262 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0721 23:26:34.486036   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:34.498761   13262 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0721 23:26:34.498786   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
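
	The kapi.go lines that follow are a poll loop: the pod list for each label selector is re-read until every matched pod reports Running and Ready. Outside the test harness the same wait can be expressed with kubectl's built-in readiness wait; a sketch using the registry selector from the log (the 180s timeout is an arbitrary illustrative value):

		kubectl -n kube-system wait pod \
		  -l kubernetes.io/minikube-addons=registry \
		  --for=condition=Ready --timeout=180s

	The ingress-nginx selector needs more care in practice, since the completed admission-webhook job pods carry the same app.kubernetes.io/name label and never become Ready.
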
	I0721 23:26:34.746521   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:26:34.977174   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:34.984565   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:35.206172   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.789587897s)
	I0721 23:26:35.206227   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:35.206248   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:35.206277   13262 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.207620315s)
	I0721 23:26:35.206234   13262 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.734851735s)
	I0721 23:26:35.206331   13262 api_server.go:72] duration metric: took 8.733952881s to wait for apiserver process to appear ...
	I0721 23:26:35.206346   13262 api_server.go:88] waiting for apiserver healthz status ...
	I0721 23:26:35.206369   13262 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0721 23:26:35.206558   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:35.206620   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:35.206637   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:35.206654   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:35.206681   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:35.207138   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:35.207174   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:35.207189   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:35.207203   13262 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-688294"
	I0721 23:26:35.207908   13262 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:26:35.208732   13262 out.go:177] * Verifying csi-hostpath-driver addon...
	I0721 23:26:35.210120   13262 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0721 23:26:35.210884   13262 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0721 23:26:35.211139   13262 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0721 23:26:35.211160   13262 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0721 23:26:35.222500   13262 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0721 23:26:35.223794   13262 api_server.go:141] control plane version: v1.30.3
	I0721 23:26:35.223816   13262 api_server.go:131] duration metric: took 17.462329ms to wait for apiserver health ...
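
	The healthz probe here is a plain HTTPS GET against the apiserver; a 200 response with the body "ok" counts as healthy. A hand-run equivalent against the endpoint shown in the log (the /healthz path is served to unauthenticated clients by default; -k skips certificate verification for brevity):

		curl -k https://192.168.39.142:8443/healthz
		# ok
		# or, using the cluster credentials from the kubeconfig:
		kubectl --context addons-688294 get --raw /healthz
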
	I0721 23:26:35.223825   13262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0721 23:26:35.238659   13262 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0721 23:26:35.238679   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:35.265571   13262 system_pods.go:59] 19 kube-system pods found
	I0721 23:26:35.265598   13262 system_pods.go:61] "coredns-7db6d8ff4d-gjb75" [c86d3c78-58cc-447e-a5c9-52d4e4a20e1a] Running
	I0721 23:26:35.265603   13262 system_pods.go:61] "coredns-7db6d8ff4d-wxvm9" [2a1974fc-f711-4ee3-9ea9-0950557b6591] Running
	I0721 23:26:35.265609   13262 system_pods.go:61] "csi-hostpath-attacher-0" [40077e94-802d-420a-b455-ab737983b277] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0721 23:26:35.265613   13262 system_pods.go:61] "csi-hostpath-resizer-0" [aecffca5-4e9e-4b3a-aa94-26595456d158] Pending
	I0721 23:26:35.265621   13262 system_pods.go:61] "csi-hostpathplugin-h5wsx" [c86e378b-c880-4595-8d6e-08e01fb0245d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0721 23:26:35.265625   13262 system_pods.go:61] "etcd-addons-688294" [856f9d44-b0a3-4b78-8036-e0d7c246f307] Running
	I0721 23:26:35.265630   13262 system_pods.go:61] "kube-apiserver-addons-688294" [9f5dff41-7d2a-4999-b1e3-d4d5fb9b6df9] Running
	I0721 23:26:35.265634   13262 system_pods.go:61] "kube-controller-manager-addons-688294" [8f0e109c-e220-4b7a-a2a6-31276fab4267] Running
	I0721 23:26:35.265639   13262 system_pods.go:61] "kube-ingress-dns-minikube" [3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0721 23:26:35.265644   13262 system_pods.go:61] "kube-proxy-jcqpx" [03cc3bb7-95da-48e2-9f10-bbc947e4f3ee] Running
	I0721 23:26:35.265651   13262 system_pods.go:61] "kube-scheduler-addons-688294" [392d1358-a63c-49c0-8f9e-98ba38f0847c] Running
	I0721 23:26:35.265658   13262 system_pods.go:61] "metrics-server-c59844bb4-bstqh" [ae1f9397-4344-4d3c-a416-ee538fc6ae94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0721 23:26:35.265666   13262 system_pods.go:61] "nvidia-device-plugin-daemonset-mqmww" [8f13b775-6ef2-4604-a624-4a861b5001b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0721 23:26:35.265674   13262 system_pods.go:61] "registry-656c9c8d9c-f6bxb" [8ed372bf-f96f-42fa-a8f1-eddc6650451c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0721 23:26:35.265685   13262 system_pods.go:61] "registry-proxy-2gnkd" [a7a0e03d-5c29-4e30-9118-ff8299b7ca06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0721 23:26:35.265695   13262 system_pods.go:61] "snapshot-controller-745499f584-jhgrt" [3b4f303a-68fb-4d26-bdf5-dfe540adffc9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0721 23:26:35.265708   13262 system_pods.go:61] "snapshot-controller-745499f584-mc4vn" [ff9546b7-95c6-4243-82bb-356750d46a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0721 23:26:35.265714   13262 system_pods.go:61] "storage-provisioner" [e698e282-1395-4fd6-a797-6a0eb40bbabc] Running
	I0721 23:26:35.265722   13262 system_pods.go:61] "tiller-deploy-6677d64bcd-7tqs9" [c6255c6f-8301-451a-905c-7aabaac5493c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0721 23:26:35.265730   13262 system_pods.go:74] duration metric: took 41.899202ms to wait for pod list to return data ...
	I0721 23:26:35.265739   13262 default_sa.go:34] waiting for default service account to be created ...
	I0721 23:26:35.278634   13262 default_sa.go:45] found service account: "default"
	I0721 23:26:35.278660   13262 default_sa.go:55] duration metric: took 12.914679ms for default service account to be created ...
	I0721 23:26:35.278670   13262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0721 23:26:35.290715   13262 system_pods.go:86] 19 kube-system pods found
	I0721 23:26:35.290739   13262 system_pods.go:89] "coredns-7db6d8ff4d-gjb75" [c86d3c78-58cc-447e-a5c9-52d4e4a20e1a] Running
	I0721 23:26:35.290745   13262 system_pods.go:89] "coredns-7db6d8ff4d-wxvm9" [2a1974fc-f711-4ee3-9ea9-0950557b6591] Running
	I0721 23:26:35.290755   13262 system_pods.go:89] "csi-hostpath-attacher-0" [40077e94-802d-420a-b455-ab737983b277] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0721 23:26:35.290761   13262 system_pods.go:89] "csi-hostpath-resizer-0" [aecffca5-4e9e-4b3a-aa94-26595456d158] Pending
	I0721 23:26:35.290778   13262 system_pods.go:89] "csi-hostpathplugin-h5wsx" [c86e378b-c880-4595-8d6e-08e01fb0245d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0721 23:26:35.290787   13262 system_pods.go:89] "etcd-addons-688294" [856f9d44-b0a3-4b78-8036-e0d7c246f307] Running
	I0721 23:26:35.290796   13262 system_pods.go:89] "kube-apiserver-addons-688294" [9f5dff41-7d2a-4999-b1e3-d4d5fb9b6df9] Running
	I0721 23:26:35.290801   13262 system_pods.go:89] "kube-controller-manager-addons-688294" [8f0e109c-e220-4b7a-a2a6-31276fab4267] Running
	I0721 23:26:35.290809   13262 system_pods.go:89] "kube-ingress-dns-minikube" [3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0721 23:26:35.290815   13262 system_pods.go:89] "kube-proxy-jcqpx" [03cc3bb7-95da-48e2-9f10-bbc947e4f3ee] Running
	I0721 23:26:35.290820   13262 system_pods.go:89] "kube-scheduler-addons-688294" [392d1358-a63c-49c0-8f9e-98ba38f0847c] Running
	I0721 23:26:35.290826   13262 system_pods.go:89] "metrics-server-c59844bb4-bstqh" [ae1f9397-4344-4d3c-a416-ee538fc6ae94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0721 23:26:35.290842   13262 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmww" [8f13b775-6ef2-4604-a624-4a861b5001b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0721 23:26:35.290854   13262 system_pods.go:89] "registry-656c9c8d9c-f6bxb" [8ed372bf-f96f-42fa-a8f1-eddc6650451c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0721 23:26:35.290867   13262 system_pods.go:89] "registry-proxy-2gnkd" [a7a0e03d-5c29-4e30-9118-ff8299b7ca06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0721 23:26:35.290907   13262 system_pods.go:89] "snapshot-controller-745499f584-jhgrt" [3b4f303a-68fb-4d26-bdf5-dfe540adffc9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0721 23:26:35.290918   13262 system_pods.go:89] "snapshot-controller-745499f584-mc4vn" [ff9546b7-95c6-4243-82bb-356750d46a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0721 23:26:35.290925   13262 system_pods.go:89] "storage-provisioner" [e698e282-1395-4fd6-a797-6a0eb40bbabc] Running
	I0721 23:26:35.290932   13262 system_pods.go:89] "tiller-deploy-6677d64bcd-7tqs9" [c6255c6f-8301-451a-905c-7aabaac5493c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0721 23:26:35.290941   13262 system_pods.go:126] duration metric: took 12.26527ms to wait for k8s-apps to be running ...
	I0721 23:26:35.290953   13262 system_svc.go:44] waiting for kubelet service to be running ....
	I0721 23:26:35.291009   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:26:35.348144   13262 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0721 23:26:35.348166   13262 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0721 23:26:35.396624   13262 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0721 23:26:35.396643   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0721 23:26:35.446797   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0721 23:26:35.470356   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:35.470637   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:35.718872   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:35.971783   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:35.972010   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:36.216166   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:36.299362   13262 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.008321294s)
	I0721 23:26:36.299381   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.552811786s)
	I0721 23:26:36.299401   13262 system_svc.go:56] duration metric: took 1.008444614s WaitForService to wait for kubelet
	I0721 23:26:36.299411   13262 kubeadm.go:582] duration metric: took 9.827035938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:26:36.299430   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:36.299439   13262 node_conditions.go:102] verifying NodePressure condition ...
	I0721 23:26:36.299447   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:36.299890   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:36.299910   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:36.299919   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:36.299928   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:36.300242   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:36.300264   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:36.300283   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:36.302799   13262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:26:36.302820   13262 node_conditions.go:123] node cpu capacity is 2
	I0721 23:26:36.302829   13262 node_conditions.go:105] duration metric: took 3.385045ms to run NodePressure ...
	I0721 23:26:36.302839   13262 start.go:241] waiting for startup goroutines ...
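
	The NodePressure check simply reads the node object's capacity fields, which is why the ephemeral-storage and cpu figures above match what the node reports. The same values can be inspected directly (node name taken from the log):

		kubectl get node addons-688294 -o jsonpath='{.status.capacity}'
		# prints the node's capacity map (cpu, memory, ephemeral-storage, ...)
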
	I0721 23:26:36.503901   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:36.504507   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:36.749772   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:36.785906   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.33907151s)
	I0721 23:26:36.785981   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:36.786000   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:36.786254   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:36.786272   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:36.786283   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:36.786292   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:36.786508   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:36.786516   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:36.786525   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:36.788390   13262 addons.go:475] Verifying addon gcp-auth=true in "addons-688294"
	I0721 23:26:36.789894   13262 out.go:177] * Verifying gcp-auth addon...
	I0721 23:26:36.791877   13262 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0721 23:26:36.838209   13262 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0721 23:26:36.838229   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:36.972170   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:36.974168   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:37.217753   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:37.295519   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:37.469966   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:37.471023   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:37.716522   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:37.795838   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:37.970797   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:37.971718   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:38.216682   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:38.298625   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:38.470593   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:38.470988   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:38.717740   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:38.795929   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:38.971439   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:38.971446   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:39.217135   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:39.294879   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:39.470241   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:39.470501   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:39.715880   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:39.795249   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:39.971556   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:39.974334   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:40.216791   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:40.295027   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:40.471871   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:40.475493   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:40.929131   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:40.930137   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:40.971195   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:40.972018   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:41.216640   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:41.295044   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:41.471045   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:41.471453   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:41.715653   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:41.795675   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:41.969709   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:41.971389   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:42.221237   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:42.331310   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:42.471330   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:42.471559   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:42.719452   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:42.795961   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:42.969700   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:42.970769   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:43.251644   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:43.296219   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:43.469587   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:43.471527   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:43.716224   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:43.795652   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:43.969693   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:43.970819   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:44.216281   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:44.295461   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:44.469374   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:44.470655   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:44.716626   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:44.796315   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:44.970483   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:44.970664   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:45.217280   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:45.295271   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:45.471355   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:45.471736   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:45.717034   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:45.795854   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:45.969942   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:45.972134   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:46.216149   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:46.295207   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:46.471756   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:46.472049   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:46.716238   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:46.795240   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:46.971580   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:46.971734   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:47.216740   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:47.295883   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:47.469690   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:47.472260   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:47.716415   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:47.795659   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:47.970257   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:47.972481   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:48.216867   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:48.295276   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:48.471849   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:48.472002   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:48.716459   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:48.796076   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:48.974646   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:48.974822   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:49.215956   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:49.294755   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:49.469361   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:49.471377   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:49.717031   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:49.794924   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:49.971098   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:49.971694   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:50.216240   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:50.295914   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:50.471424   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:50.471609   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:50.719955   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:50.796369   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:50.971874   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:50.973265   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:51.216749   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:51.296176   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:51.470866   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:51.472368   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:51.716330   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:51.795556   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:51.969198   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:51.971919   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:52.216938   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:52.295345   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:52.470813   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:52.470897   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:52.716354   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:52.795646   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:52.970127   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:52.971798   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:53.217597   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:53.296237   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:53.469817   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:53.474498   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:53.716578   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:53.804889   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:53.971973   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:53.973925   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:54.225723   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:54.297257   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:54.471177   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:54.471870   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:54.715928   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:54.795370   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:54.971545   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:54.971867   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:55.216590   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:55.295953   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:55.470576   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:55.471044   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:55.717086   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:55.800499   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:55.969801   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:55.972422   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:56.215704   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:56.295189   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:56.471074   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:56.473006   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:56.715765   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:56.796950   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:56.970788   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:56.973186   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:57.216353   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:57.295887   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:57.470672   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:57.471809   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:57.716036   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:57.795200   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:57.971219   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:57.971553   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:58.222015   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:58.295981   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:58.469632   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:58.471489   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:58.716164   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:58.799232   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:58.972880   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:58.974318   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:59.215713   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:59.295827   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:59.470949   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:59.473251   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:59.716883   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:59.801422   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:59.970170   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:59.971394   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:00.216070   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:00.295301   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:00.474941   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:00.475024   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:00.716314   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:00.795653   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:00.969619   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:00.971386   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:01.217862   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:01.295489   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:01.469934   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:01.472664   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:01.715898   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:01.795730   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:01.970171   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:01.970470   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:02.215957   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:02.295264   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:02.471978   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:02.472128   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:02.716840   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:02.796643   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:02.970646   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:02.971628   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:03.217013   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:03.295381   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:03.471307   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:03.471953   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:03.717411   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:03.795404   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:03.969240   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:03.970538   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:04.216053   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:04.296086   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:04.472596   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:04.473420   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:04.716875   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:04.795934   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:04.970704   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:04.970711   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:05.216417   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:05.295758   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:05.470738   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:05.472177   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:05.716463   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:05.796777   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:05.969502   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:05.971351   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:06.216507   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:06.295108   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:06.471567   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:06.471834   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:06.716761   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:06.794903   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:06.971436   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:06.971723   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:07.216389   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:07.295829   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:07.469871   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:07.472386   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:07.716051   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:07.795411   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:07.970200   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:07.971317   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:08.215782   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:08.295261   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:08.469606   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:08.471513   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:08.719714   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:08.796523   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:08.971186   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:08.971224   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:09.218278   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:09.295187   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:09.470412   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:09.472012   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:09.716543   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:09.795507   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:09.971281   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:09.971478   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:10.215680   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:10.294961   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:10.470265   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:10.470350   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:10.715480   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:10.796035   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:10.972202   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:10.972508   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:11.216179   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:11.296713   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:11.469451   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:11.471070   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:11.716921   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:11.795760   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:11.970117   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:11.971644   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:12.435057   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:12.437734   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:12.469591   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:12.472680   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:12.716496   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:12.796332   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:12.971027   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:12.971153   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:13.218985   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:13.295883   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:13.469847   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:13.472860   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:13.716649   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:13.794977   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:13.970727   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:13.970806   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:14.216605   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:14.295300   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:14.470052   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:14.470291   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:14.716152   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:14.796462   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:14.971502   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:14.971560   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:15.216720   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:15.295130   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:15.640612   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:15.641404   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:15.718330   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:15.795810   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:15.971043   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:15.971211   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:16.216645   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:16.296694   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:16.469617   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:16.470833   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:16.716697   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:16.796346   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:16.970208   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:16.970216   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:17.215782   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:17.295172   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:17.471023   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:17.472194   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:17.716658   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:17.794837   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:17.972431   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:17.973976   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:18.216849   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:18.296061   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:18.469736   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:18.471216   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:18.716435   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:18.796110   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:18.970212   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:18.970467   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:19.216267   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:19.295563   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:19.472297   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:19.472717   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:19.716125   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:19.795818   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:19.971038   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:19.971437   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:20.215636   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:20.296521   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:20.469309   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:20.470698   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:20.717299   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:20.795398   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:20.971455   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:20.971738   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:21.218971   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:21.296033   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:21.470138   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:21.472089   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:21.715514   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:21.795698   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:21.969507   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:21.971512   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:22.217044   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:22.295667   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:22.470013   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:22.472491   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:22.716229   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:22.795547   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:22.971361   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:22.973555   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:23.218798   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:23.295952   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:23.470787   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:23.471794   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:23.717782   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:23.794967   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:23.971377   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:23.971940   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:24.216264   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:24.295490   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:24.469640   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:24.472079   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:24.715964   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:24.794965   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:24.970424   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:24.971540   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:25.216519   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:25.295424   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:25.469174   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:25.470358   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:25.715756   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:25.795192   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:25.971569   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:25.971644   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:26.215842   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:26.295970   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:26.471107   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:26.471604   13262 kapi.go:107] duration metric: took 52.005591215s to wait for kubernetes.io/minikube-addons=registry ...
	I0721 23:27:26.717307   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:26.796477   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:26.972689   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:27.222357   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:27.299496   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:27.469374   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:27.716294   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:27.796213   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:27.970386   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:28.217451   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:28.297532   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:28.471723   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:28.719852   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:28.795124   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:28.970406   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:29.217300   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:29.296938   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:29.470002   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:29.716681   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:29.795284   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:29.971969   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:30.216402   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:30.295689   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:30.469802   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:30.716428   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:30.795854   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:30.969881   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:31.216137   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:31.297543   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:31.469229   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:31.716630   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:31.794810   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:31.969692   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:32.238174   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:32.406493   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:32.471592   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:32.717249   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:32.795916   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:32.970354   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:33.216518   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:33.295490   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:33.469430   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:33.716184   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:33.800622   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:34.416330   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:34.419614   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:34.421489   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:34.469388   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:34.716641   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:34.794948   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:34.969900   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:35.216554   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:35.295889   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:35.470330   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:35.717578   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:35.796377   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:35.970791   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:36.224894   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:36.296325   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:36.470644   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:36.719535   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:36.795443   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:36.970541   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:37.217122   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:37.295952   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:37.470176   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:37.721587   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:37.798527   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:37.970613   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:38.216309   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:38.295966   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:38.470646   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:38.716218   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:38.796020   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:38.970300   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:39.224327   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:39.302060   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:39.469649   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:39.716231   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:39.795749   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:39.969781   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:40.216496   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:40.295858   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:40.470659   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:40.730246   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:40.795940   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:40.970298   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:41.240753   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:41.297754   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:41.470107   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:41.717249   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:41.795788   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:41.969931   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:42.217848   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:42.295900   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:42.470816   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:42.718725   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:42.796392   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:42.970332   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:43.215893   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:43.295405   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:43.469806   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:43.716812   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:43.795741   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:43.970579   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:44.215908   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:44.295341   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:44.470543   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:44.716493   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:44.796116   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:44.970457   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:45.216862   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:45.295416   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:45.470624   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:45.717356   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:45.796167   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:45.970596   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:46.216304   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:46.295693   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:46.472854   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:46.718301   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:46.795434   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:46.969008   13262 kapi.go:107] duration metric: took 1m12.503737006s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0721 23:27:47.216822   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:47.295006   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:47.715974   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:47.795131   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:48.216244   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:48.295968   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:48.716696   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:48.795965   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:49.216473   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:49.296008   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:49.715950   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:49.795155   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:50.215963   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:50.295564   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:50.717601   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:50.795822   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:51.217524   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:51.295387   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:51.720770   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:51.802765   13262 kapi.go:107] duration metric: took 1m15.010883552s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0721 23:27:51.804163   13262 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-688294 cluster.
	I0721 23:27:51.805473   13262 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0721 23:27:51.806651   13262 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
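	The opt-out described in the message above is a plain pod label. A minimal sketch of what that looks like in a pod manifest; note the log only names the label key, so the value "true" here is an assumption:

	# Hypothetical pod that opts out of GCP credential injection by the
	# gcp-auth webhook. Only the label key is named in the log above;
	# the value "true" is assumed.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: nginx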
	I0721 23:27:52.216611   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:52.716080   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:53.221525   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:53.715386   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:54.216544   13262 kapi.go:107] duration metric: took 1m19.005654586s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0721 23:27:54.218212   13262 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, storage-provisioner, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0721 23:27:54.219475   13262 addons.go:510] duration metric: took 1m27.747081657s for enable addons: enabled=[cloud-spanner default-storageclass ingress-dns nvidia-device-plugin storage-provisioner-rancher helm-tiller storage-provisioner metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0721 23:27:54.219522   13262 start.go:246] waiting for cluster config update ...
	I0721 23:27:54.219542   13262 start.go:255] writing updated cluster config ...
	I0721 23:27:54.219803   13262 ssh_runner.go:195] Run: rm -f paused
	I0721 23:27:54.269680   13262 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0721 23:27:54.271594   13262 out.go:177] * Done! kubectl is now configured to use "addons-688294" cluster and "default" namespace by default
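	The hundreds of "waiting for pod" lines and the closing "duration metric" lines above come from a poll-and-log loop over a label selector. A minimal client-go sketch of that pattern, assuming a ~500ms poll interval and a simple Running-phase check (not minikube's actual kapi.go implementation, which also inspects readiness):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector until all are Running, logging
	// each attempt the way the kapi lines above do, then reports the total
	// wait as a duration metric.
	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		deadline := start.Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				log.Printf("duration metric: took %s to wait for %s ...", time.Since(start), selector)
				return nil
			}
			log.Printf("waiting for pod %q, current state: Pending: [%v]", selector, err)
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("timed out waiting for pods %q after %s", selector, timeout)
	}

	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPods(context.Background(), client, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
	}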
	
	
	==> CRI-O <==
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.674694939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721604643674667435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7412b944-36d6-43e2-ae8d-17f475b62411 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.675247925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07b19638-90c6-49f7-a899-1556be2e5b47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.675321053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07b19638-90c6-49f7-a899-1556be2e5b47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.675641744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15eff6453b4ff2e0d8599ed441e82194044302e6c0e9a6a67ec75b8c42c5d30,PodSandboxId:6723880a77daa888c26e180b63b4c929395e73ea7ce018b20ff4a126af5f86e7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721604636392888052,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-j4zfd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e0542de-ebc0-4bf8-81fa-be127d873ed9,},Annotations:map[string]string{io.kubernetes.container.hash: 81bb068b,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00dc37072b53da80fa11489232575a6ad6e2250cde1c3e356514457b351ecdb,PodSandboxId:ee9bac8c8411d30426965c8ce5f2a58660119779e024547aa3b4e22a27ae9e1d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721604497444887954,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b5aca8f-8b07-4191-ba5e-991bdee098bd,},Annotations:map[string]string{io.kubernet
es.container.hash: 473150de,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11795e013590a0a6cc24c6aae8310fa0410ebc68f84706b8bb4050aaa15dda4b,PodSandboxId:e2d9f4cc3301f3bce3b473aa714e330a70da19f99d23d2084be0a58858b4a499,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721604482449212746,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-2gjtz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 38892129-f578-47c0-8299-1968efa46c65,},Annotations:map[string]string{io.kubernetes.container.hash: b264623,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10fe9c60534cd4719d699ec276725ea6aa808d05bde5a847836b3d6e95aee5,PodSandboxId:0926d78b49df19034109ae2e58b0f379d6db5354dadb2bf634f8a9153fd6564c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721604471248712557,Labels:map[string]string{io.kubernetes.container.name: gcp-auth
,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-56jkt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6072e9b1-6994-4192-a4e0-48ae9b9edecc,},Annotations:map[string]string{io.kubernetes.container.hash: b184aad0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac006a39a11cbc7ce6f68d1c7e7114fe1ddcb4bee444dcfa3ef43edb205e4628,PodSandboxId:ff47b7bdb2b5db5fe9e7c4c5104ea96f1c8a0921a3fc8df30426513a0ccb7e75,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINE
R_EXITED,CreatedAt:1721604449608232109,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dtc6s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3cafbe5c-1562-4ffe-9a30-dc3a5b78eaf9,},Annotations:map[string]string{io.kubernetes.container.hash: e9fb6668,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d526fcd9f9fe0c3595afd522eca1205c12481098df67cc34a06379fc7ecab0,PodSandboxId:c049aba1a728541e0d16b8b0ba39243525a6c8cb8c24aee67226b1a496adcdfe,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f617
5e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721604449469809897,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-grlnb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3e46919-7c5d-4ac4-9804-2a74d4842602,},Annotations:map[string]string{io.kubernetes.container.hash: 44a3a8a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf19c7fa60a64b14e442b8ab9bd039cb633293bbd90afea968e3628b49c0596,PodSandboxId:7e2f0f95b9112d081336ca5e657829515a4410ebb17066373cbc2ea81d895ec3,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf3
1e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721604432542215746,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-7mmml,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a67c330a-d2bc-44b5-8cf9-8245a6e01af8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ab0e0b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b098dcd64eb0df4a12eecc6985e68a85e16fb027bb0e608209b88492c70e954e,PodSandboxId:95ebd78cf6a80027888c4d405607de04828eca51113fc96233eb93713b372e85,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e
412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721604427359631527,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bstqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1f9397-4344-4d3c-a416-ee538fc6ae94,},Annotations:map[string]string{io.kubernetes.container.hash: 3587015b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216918ce9b7bbc2ae42421b5a53f7d188c1ab874575b496710855e7fc763457f,PodSandboxId:7e607d1884ecdb9a2840076ec5b3b3f2dda187dedd971056b292da88015e8578,Metadata:&ContainerMetadata{Name:storage-
provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721604392500607150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e698e282-1395-4fd6-a797-6a0eb40bbabc,},Annotations:map[string]string{io.kubernetes.container.hash: 92df3763,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0f54f488787a87c32433dd03d9c7a4464dce0bc84b589f65e99b07587f999,PodSandboxId:514c9ecc5bf0db899743ee7029041bddb8dc387bbae6dd08b7af952139757335,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},
Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721604390043596816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wxvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1974fc-f711-4ee3-9ea9-0950557b6591,},Annotations:map[string]string{io.kubernetes.container.hash: 726416f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c
969bfef3f523281aaca87bb686017810ad5369caa22f2aaf3c61d00728f4e6b,PodSandboxId:43264f8b65dd3f4bb2a1a2f104fde6fbbca2337a2d23eac89553dfc8e26b32e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721604387307351550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jcqpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cc3bb7-95da-48e2-9f10-bbc947e4f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: cf9d8b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7922c57b9139f00b1d11e9d4cb3c435d1
0e0385f96da2f8e37b4fd1f8c219ea,PodSandboxId:3639e55de2a0b72f7f36302058d7996262a1c3eb2d5dcc63f63ebd81bd42ecde,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721604367061133899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dfa3605f18775cc841f98db38b9796,},Annotations:map[string]string{io.kubernetes.container.hash: f78b0edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ceaeb4ab41339398a0cee66e7a13e30ce8f0543200c66b0c81fbfc71e8e45,PodSandboxId:111
269c2448497e5e04484046bcf5613d4dbe399d4578d9169f3ea4b1ba4e86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721604367068874246,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af1326a0d6ed525f34cf1aab737348d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbddd19a5edd22cea77e671d8b69e53eac4f429920e77d04dbf06843304bb6d0,PodSan
dboxId:7aa27e092fce3794c551d50c0bc4f62650d72eed041ef71b44cf79ee33ea6946,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721604367055199919,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c694666e586ab8e2ae8a2f8987d97f,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3b0c0d0677ecee63a204b386d3d9f4ff8a5d981e988b5bc69b2b331496ecca,PodSandboxId:6f4e5b3ada
85774236c8ce727eb3b237cd4bff2279029f08a59c8a73a84ac133,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721604366797931469,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc84b0c10afcc2c17c70a4265c6d6c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07b19638-90c6-49f7-a899-1556be2e5b47 name=/runtime.v1.RuntimeService/L
istContainers
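	Each paired "Request:"/"Response:" debug line above comes from CRI-O's gRPC logging interceptor (the file="otel-collector/interceptors.go:62"/":74" tags), which stamps both halves of a single CRI call with one shared id. A minimal sketch of that interceptor pattern in Go (an illustration under stated assumptions, not CRI-O's actual source), using logrus and google/uuid as stand-ins for its logging and id generation:

	// Sketch only: log a unary RPC's request and response under one shared id,
	// mirroring the paired debug lines above.
	package crilog

	import (
		"context"

		"github.com/google/uuid"
		"github.com/sirupsen/logrus"
		"google.golang.org/grpc"
	)

	func logUnary(ctx context.Context, req interface{},
		info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		id := uuid.New().String() // e.g. 07b19638-90c6-49f7-a899-1556be2e5b47
		logrus.WithFields(logrus.Fields{"id": id, "name": info.FullMethod}).Debugf("Request: %+v", req)
		resp, err := handler(ctx, req)
		logrus.WithFields(logrus.Fields{"id": id, "name": info.FullMethod}).Debugf("Response: %+v", resp)
		return resp, err
	}

	// A server would install such an interceptor with:
	//   grpc.NewServer(grpc.UnaryInterceptor(logUnary))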
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.709749929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=febbecb0-41e8-4db5-a498-bd0e8eee2d4a name=/runtime.v1.RuntimeService/Version
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.709836259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=febbecb0-41e8-4db5-a498-bd0e8eee2d4a name=/runtime.v1.RuntimeService/Version
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.711074859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6dbd5494-f650-4d98-852c-35969faab093 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.712465581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721604643712438952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6dbd5494-f650-4d98-852c-35969faab093 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.713034317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dff9471-4b67-49c7-b0d4-80bda579c25e name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.713099469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dff9471-4b67-49c7-b0d4-80bda579c25e name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:30:43 addons-688294 crio[682]: time="2024-07-21 23:30:43.713476908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15eff6453b4ff2e0d8599ed441e82194044302e6c0e9a6a67ec75b8c42c5d30,PodSandboxId:6723880a77daa888c26e180b63b4c929395e73ea7ce018b20ff4a126af5f86e7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721604636392888052,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-j4zfd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e0542de-ebc0-4bf8-81fa-be127d873ed9,},Annotations:map[string]string{io.kubernetes.container.hash: 81bb068b,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00dc37072b53da80fa11489232575a6ad6e2250cde1c3e356514457b351ecdb,PodSandboxId:ee9bac8c8411d30426965c8ce5f2a58660119779e024547aa3b4e22a27ae9e1d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721604497444887954,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b5aca8f-8b07-4191-ba5e-991bdee098bd,},Annotations:map[string]string{io.kubernet
es.container.hash: 473150de,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11795e013590a0a6cc24c6aae8310fa0410ebc68f84706b8bb4050aaa15dda4b,PodSandboxId:e2d9f4cc3301f3bce3b473aa714e330a70da19f99d23d2084be0a58858b4a499,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721604482449212746,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-2gjtz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 38892129-f578-47c0-8299-1968efa46c65,},Annotations:map[string]string{io.kubernetes.container.hash: b264623,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10fe9c60534cd4719d699ec276725ea6aa808d05bde5a847836b3d6e95aee5,PodSandboxId:0926d78b49df19034109ae2e58b0f379d6db5354dadb2bf634f8a9153fd6564c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721604471248712557,Labels:map[string]string{io.kubernetes.container.name: gcp-auth
,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-56jkt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6072e9b1-6994-4192-a4e0-48ae9b9edecc,},Annotations:map[string]string{io.kubernetes.container.hash: b184aad0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac006a39a11cbc7ce6f68d1c7e7114fe1ddcb4bee444dcfa3ef43edb205e4628,PodSandboxId:ff47b7bdb2b5db5fe9e7c4c5104ea96f1c8a0921a3fc8df30426513a0ccb7e75,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINE
R_EXITED,CreatedAt:1721604449608232109,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dtc6s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3cafbe5c-1562-4ffe-9a30-dc3a5b78eaf9,},Annotations:map[string]string{io.kubernetes.container.hash: e9fb6668,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d526fcd9f9fe0c3595afd522eca1205c12481098df67cc34a06379fc7ecab0,PodSandboxId:c049aba1a728541e0d16b8b0ba39243525a6c8cb8c24aee67226b1a496adcdfe,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f617
5e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721604449469809897,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-grlnb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3e46919-7c5d-4ac4-9804-2a74d4842602,},Annotations:map[string]string{io.kubernetes.container.hash: 44a3a8a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf19c7fa60a64b14e442b8ab9bd039cb633293bbd90afea968e3628b49c0596,PodSandboxId:7e2f0f95b9112d081336ca5e657829515a4410ebb17066373cbc2ea81d895ec3,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf3
1e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721604432542215746,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-7mmml,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a67c330a-d2bc-44b5-8cf9-8245a6e01af8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ab0e0b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b098dcd64eb0df4a12eecc6985e68a85e16fb027bb0e608209b88492c70e954e,PodSandboxId:95ebd78cf6a80027888c4d405607de04828eca51113fc96233eb93713b372e85,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e
412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721604427359631527,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bstqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1f9397-4344-4d3c-a416-ee538fc6ae94,},Annotations:map[string]string{io.kubernetes.container.hash: 3587015b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216918ce9b7bbc2ae42421b5a53f7d188c1ab874575b496710855e7fc763457f,PodSandboxId:7e607d1884ecdb9a2840076ec5b3b3f2dda187dedd971056b292da88015e8578,Metadata:&ContainerMetadata{Name:storage-
provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721604392500607150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e698e282-1395-4fd6-a797-6a0eb40bbabc,},Annotations:map[string]string{io.kubernetes.container.hash: 92df3763,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0f54f488787a87c32433dd03d9c7a4464dce0bc84b589f65e99b07587f999,PodSandboxId:514c9ecc5bf0db899743ee7029041bddb8dc387bbae6dd08b7af952139757335,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},
Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721604390043596816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wxvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1974fc-f711-4ee3-9ea9-0950557b6591,},Annotations:map[string]string{io.kubernetes.container.hash: 726416f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c
969bfef3f523281aaca87bb686017810ad5369caa22f2aaf3c61d00728f4e6b,PodSandboxId:43264f8b65dd3f4bb2a1a2f104fde6fbbca2337a2d23eac89553dfc8e26b32e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721604387307351550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jcqpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cc3bb7-95da-48e2-9f10-bbc947e4f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: cf9d8b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7922c57b9139f00b1d11e9d4cb3c435d1
0e0385f96da2f8e37b4fd1f8c219ea,PodSandboxId:3639e55de2a0b72f7f36302058d7996262a1c3eb2d5dcc63f63ebd81bd42ecde,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721604367061133899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dfa3605f18775cc841f98db38b9796,},Annotations:map[string]string{io.kubernetes.container.hash: f78b0edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ceaeb4ab41339398a0cee66e7a13e30ce8f0543200c66b0c81fbfc71e8e45,PodSandboxId:111
269c2448497e5e04484046bcf5613d4dbe399d4578d9169f3ea4b1ba4e86c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721604367068874246,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af1326a0d6ed525f34cf1aab737348d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbddd19a5edd22cea77e671d8b69e53eac4f429920e77d04dbf06843304bb6d0,PodSan
dboxId:7aa27e092fce3794c551d50c0bc4f62650d72eed041ef71b44cf79ee33ea6946,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721604367055199919,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c694666e586ab8e2ae8a2f8987d97f,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3b0c0d0677ecee63a204b386d3d9f4ff8a5d981e988b5bc69b2b331496ecca,PodSandboxId:6f4e5b3ada
85774236c8ce727eb3b237cd4bff2279029f08a59c8a73a84ac133,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721604366797931469,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc84b0c10afcc2c17c70a4265c6d6c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dff9471-4b67-49c7-b0d4-80bda579c25e name=/runtime.v1.RuntimeService/L
istContainers
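	The burst above is one iteration of the steady-state CRI polling cycle against the CRI-O socket: a Version probe, an ImageFsInfo query, and an unfiltered ListContainers call, repeated every few tens of milliseconds under fresh ids. A minimal Go client sketch of the same three calls (assuming CRI-O's default endpoint unix:///var/run/crio/crio.sock; the request and response types are the k8s.io/cri-api ones that appear verbatim in the log):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed endpoint: CRI-O's default socket on the node.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Version probe, answered above with cri-o 1.29.1 / CRI v1.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Image filesystem usage, as in the ImageFsInfoResponse above.
		fs, err := runtimeapi.NewImageServiceClient(conn).ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("%s used=%dB inodes=%d\n", f.FsId.Mountpoint, f.UsedBytes.Value, f.InodesUsed.Value)
		}

		// An empty filter takes the "No filters were applied, returning full
		// container list" branch logged above.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range cs.Containers {
			// Truncated id, container name, and state, one row per container,
			// as in the "==> container status <==" table below.
			fmt.Printf("%-13s %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}

	crictl's version, imagefsinfo, and ps -a subcommands exercise these same RPCs over the same socket.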
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d15eff6453b4f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   6723880a77daa       hello-world-app-6778b5fc9f-j4zfd
	e00dc37072b53       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   ee9bac8c8411d       nginx
	11795e013590a       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   e2d9f4cc3301f       headlamp-7867546754-2gjtz
	ec10fe9c60534       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago       Running             gcp-auth                  0                   0926d78b49df1       gcp-auth-5db96cd9b4-56jkt
	ac006a39a11cb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   ff47b7bdb2b5d       ingress-nginx-admission-patch-dtc6s
	53d526fcd9f9f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   c049aba1a7285       ingress-nginx-admission-create-grlnb
	5bf19c7fa60a6       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              3 minutes ago       Running             yakd                      0                   7e2f0f95b9112       yakd-dashboard-799879c74f-7mmml
	b098dcd64eb0d       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   95ebd78cf6a80       metrics-server-c59844bb4-bstqh
	216918ce9b7bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   7e607d1884ecd       storage-provisioner
	2da0f54f48878       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   514c9ecc5bf0d       coredns-7db6d8ff4d-wxvm9
	c969bfef3f523       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   43264f8b65dd3       kube-proxy-jcqpx
	a75ceaeb4ab41       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             4 minutes ago       Running             kube-controller-manager   0                   111269c244849       kube-controller-manager-addons-688294
	b7922c57b9139       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   3639e55de2a0b       etcd-addons-688294
	cbddd19a5edd2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             4 minutes ago       Running             kube-apiserver            0                   7aa27e092fce3       kube-apiserver-addons-688294
	fb3b0c0d0677e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             4 minutes ago       Running             kube-scheduler            0                   6f4e5b3ada857       kube-scheduler-addons-688294
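
For reference, the listing above can be regenerated against this profile with crictl inside the minikube VM; a minimal sketch (the -a flag keeps the Exited admission-webhook containers visible, and assumes crictl is on the VM's PATH, as it is in minikube's ISO):

    # List all CRI-O containers on the node, including exited ones.
    out/minikube-linux-amd64 -p addons-688294 ssh "sudo crictl ps -a"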
	
	
	==> coredns [2da0f54f488787a87c32433dd03d9c7a4464dce0bc84b589f65e99b07587f999] <==
	[INFO] 10.244.0.7:44491 - 49631 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000442857s
	[INFO] 10.244.0.7:35435 - 56763 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142408s
	[INFO] 10.244.0.7:35435 - 43452 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104318s
	[INFO] 10.244.0.7:42446 - 28053 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089944s
	[INFO] 10.244.0.7:42446 - 52888 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177134s
	[INFO] 10.244.0.7:40235 - 46989 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105607s
	[INFO] 10.244.0.7:40235 - 35211 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00023455s
	[INFO] 10.244.0.7:58190 - 17849 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088716s
	[INFO] 10.244.0.7:58190 - 60855 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000044261s
	[INFO] 10.244.0.7:46913 - 7652 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000057194s
	[INFO] 10.244.0.7:46913 - 13282 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034767s
	[INFO] 10.244.0.7:46291 - 42719 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041392s
	[INFO] 10.244.0.7:46291 - 16833 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034606s
	[INFO] 10.244.0.7:35794 - 28979 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000124279s
	[INFO] 10.244.0.7:35794 - 47437 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000118998s
	[INFO] 10.244.0.22:60002 - 20685 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000307511s
	[INFO] 10.244.0.22:49713 - 47982 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143484s
	[INFO] 10.244.0.22:46007 - 19868 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000180907s
	[INFO] 10.244.0.22:39339 - 65062 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139681s
	[INFO] 10.244.0.22:58851 - 4840 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126748s
	[INFO] 10.244.0.22:33782 - 53305 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000195113s
	[INFO] 10.244.0.22:43856 - 8947 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000556053s
	[INFO] 10.244.0.22:46305 - 39605 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000583225s
	[INFO] 10.244.0.25:37615 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321788s
	[INFO] 10.244.0.25:48006 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000219016s
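
The alternating NXDOMAIN/NOERROR pairs above are the expected effect of the pod resolv.conf's ndots:5 search-path expansion (each name is tried with the cluster search domains before being resolved as written), not a lookup failure. A sketch for reproducing one of these queries from a throwaway pod; the busybox image tag is an assumption:

    # Resolve the in-cluster registry service the same way the pods above did.
    kubectl --context addons-688294 run dns-probe --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local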
	
	
	==> describe nodes <==
	Name:               addons-688294
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-688294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=addons-688294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T23_26_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-688294
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:26:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-688294
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:30:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:29:16 +0000   Sun, 21 Jul 2024 23:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:29:16 +0000   Sun, 21 Jul 2024 23:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:29:16 +0000   Sun, 21 Jul 2024 23:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:29:16 +0000   Sun, 21 Jul 2024 23:26:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    addons-688294
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 431e62ca24b4445b82feed907221d613
	  System UUID:                431e62ca-24b4-445b-82fe-ed907221d613
	  Boot ID:                    f5af4e40-e7a5-42da-a1c1-a4ffed10427f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-j4zfd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-5db96cd9b4-56jkt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  headlamp                    headlamp-7867546754-2gjtz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 coredns-7db6d8ff4d-wxvm9                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m18s
	  kube-system                 etcd-addons-688294                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m33s
	  kube-system                 kube-apiserver-addons-688294             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-addons-688294    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-jcqpx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-scheduler-addons-688294             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 metrics-server-c59844bb4-bstqh           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m13s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  yakd-dashboard              yakd-dashboard-799879c74f-7mmml          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m15s  kube-proxy       
	  Normal  Starting                 4m33s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m32s  kubelet          Node addons-688294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s  kubelet          Node addons-688294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s  kubelet          Node addons-688294 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m31s  kubelet          Node addons-688294 status is now: NodeReady
	  Normal  RegisteredNode           4m19s  node-controller  Node addons-688294 event: Registered Node addons-688294 in Controller
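
The node report above can be regenerated at any point in the post-mortem with:

    kubectl --context addons-688294 describe node addons-688294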
	
	
	==> dmesg <==
	[  +9.203854] systemd-fstab-generator[1484]: Ignoring "noauto" option for root device
	[  +5.215554] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.049865] kauditd_printk_skb: 163 callbacks suppressed
	[  +6.544472] kauditd_printk_skb: 36 callbacks suppressed
	[Jul21 23:27] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.659508] kauditd_printk_skb: 25 callbacks suppressed
	[ +11.960010] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.211945] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.300505] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.196229] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.182157] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.052138] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.768790] kauditd_printk_skb: 47 callbacks suppressed
	[Jul21 23:28] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.611592] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.521517] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.195302] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.025632] kauditd_printk_skb: 9 callbacks suppressed
	[ +30.049239] kauditd_printk_skb: 23 callbacks suppressed
	[Jul21 23:29] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.133264] kauditd_printk_skb: 8 callbacks suppressed
	[ +17.641905] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.326491] kauditd_printk_skb: 33 callbacks suppressed
	[Jul21 23:30] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.227683] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b7922c57b9139f00b1d11e9d4cb3c435d10e0385f96da2f8e37b4fd1f8c219ea] <==
	{"level":"warn","ts":"2024-07-21T23:27:34.403625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"490.775677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-21T23:27:34.403666Z","caller":"traceutil/trace.go:171","msg":"trace[688219142] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1041; }","duration":"490.841201ms","start":"2024-07-21T23:27:33.912817Z","end":"2024-07-21T23:27:34.403658Z","steps":["trace[688219142] 'count revisions from in-memory index tree'  (duration: 490.730217ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:27:34.403687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:27:33.912799Z","time spent":"490.881389ms","remote":"127.0.0.1:59532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":21,"response size":31,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true "}
	{"level":"warn","ts":"2024-07-21T23:27:34.403749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.028631ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-07-21T23:27:34.403848Z","caller":"traceutil/trace.go:171","msg":"trace[1474776033] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1041; }","duration":"117.15471ms","start":"2024-07-21T23:27:34.286685Z","end":"2024-07-21T23:27:34.403839Z","steps":["trace[1474776033] 'range keys from in-memory index tree'  (duration: 116.618975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:27:34.40382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"441.963641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-21T23:27:34.404109Z","caller":"traceutil/trace.go:171","msg":"trace[214288561] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1041; }","duration":"442.273326ms","start":"2024-07-21T23:27:33.961827Z","end":"2024-07-21T23:27:34.4041Z","steps":["trace[214288561] 'range keys from in-memory index tree'  (duration: 441.87972ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:27:34.404241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:27:33.96181Z","time spent":"442.417029ms","remote":"127.0.0.1:59420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14387,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-07-21T23:28:01.363788Z","caller":"traceutil/trace.go:171","msg":"trace[1999037915] linearizableReadLoop","detail":"{readStateIndex:1252; appliedIndex:1251; }","duration":"158.291381ms","start":"2024-07-21T23:28:01.205475Z","end":"2024-07-21T23:28:01.363767Z","steps":["trace[1999037915] 'read index received'  (duration: 158.160994ms)","trace[1999037915] 'applied index is now lower than readState.Index'  (duration: 129.865µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-21T23:28:01.364185Z","caller":"traceutil/trace.go:171","msg":"trace[1446860162] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1216; }","duration":"352.618638ms","start":"2024-07-21T23:28:01.011509Z","end":"2024-07-21T23:28:01.364127Z","steps":["trace[1446860162] 'process raft request'  (duration: 352.165667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:28:01.364358Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:28:01.011496Z","time spent":"352.757616ms","remote":"127.0.0.1:59658","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":42,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:871 > success:<request_delete_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > > failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
	{"level":"warn","ts":"2024-07-21T23:28:01.364622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.152365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-07-21T23:28:01.365662Z","caller":"traceutil/trace.go:171","msg":"trace[1914586065] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1216; }","duration":"160.214386ms","start":"2024-07-21T23:28:01.205439Z","end":"2024-07-21T23:28:01.365653Z","steps":["trace[1914586065] 'agreement among raft nodes before linearized reading'  (duration: 159.07186ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:28:01.365193Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.77129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.142\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-21T23:28:01.368999Z","caller":"traceutil/trace.go:171","msg":"trace[2032890233] range","detail":"{range_begin:/registry/masterleases/192.168.39.142; range_end:; response_count:1; response_revision:1216; }","duration":"110.598297ms","start":"2024-07-21T23:28:01.25839Z","end":"2024-07-21T23:28:01.368988Z","steps":["trace[2032890233] 'agreement among raft nodes before linearized reading'  (duration: 106.707832ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-21T23:28:22.3534Z","caller":"traceutil/trace.go:171","msg":"trace[372063583] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"288.013913ms","start":"2024-07-21T23:28:22.065359Z","end":"2024-07-21T23:28:22.353373Z","steps":["trace[372063583] 'process raft request'  (duration: 287.566862ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:28:22.35388Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.83428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-21T23:28:22.354132Z","caller":"traceutil/trace.go:171","msg":"trace[1984484969] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1402; }","duration":"224.134616ms","start":"2024-07-21T23:28:22.129987Z","end":"2024-07-21T23:28:22.354122Z","steps":["trace[1984484969] 'agreement among raft nodes before linearized reading'  (duration: 223.838223ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-21T23:28:22.353776Z","caller":"traceutil/trace.go:171","msg":"trace[183201219] linearizableReadLoop","detail":"{readStateIndex:1447; appliedIndex:1446; }","duration":"223.107323ms","start":"2024-07-21T23:28:22.130013Z","end":"2024-07-21T23:28:22.35312Z","steps":["trace[183201219] 'read index received'  (duration: 222.972893ms)","trace[183201219] 'applied index is now lower than readState.Index'  (duration: 133.751µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-21T23:28:22.355349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.578485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-21T23:28:22.355436Z","caller":"traceutil/trace.go:171","msg":"trace[1840188235] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1402; }","duration":"167.690355ms","start":"2024-07-21T23:28:22.187738Z","end":"2024-07-21T23:28:22.355428Z","steps":["trace[1840188235] 'agreement among raft nodes before linearized reading'  (duration: 166.761075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:29:07.032367Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.801652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:9302"}
	{"level":"info","ts":"2024-07-21T23:29:07.032444Z","caller":"traceutil/trace.go:171","msg":"trace[1411897326] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1549; }","duration":"311.914818ms","start":"2024-07-21T23:29:06.720505Z","end":"2024-07-21T23:29:07.03242Z","steps":["trace[1411897326] 'range keys from in-memory index tree'  (duration: 311.612857ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:29:07.032485Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:29:06.720462Z","time spent":"312.012186ms","remote":"127.0.0.1:59420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":9326,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-07-21T23:29:12.69364Z","caller":"traceutil/trace.go:171","msg":"trace[1503670051] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"152.311423ms","start":"2024-07-21T23:29:12.541308Z","end":"2024-07-21T23:29:12.693619Z","steps":["trace[1503670051] 'process raft request'  (duration: 152.002057ms)"],"step_count":1}
	
	
	==> gcp-auth [ec10fe9c60534cd4719d699ec276725ea6aa808d05bde5a847836b3d6e95aee5] <==
	2024/07/21 23:27:51 GCP Auth Webhook started!
	2024/07/21 23:27:55 Ready to marshal response ...
	2024/07/21 23:27:55 Ready to write response ...
	2024/07/21 23:27:55 Ready to marshal response ...
	2024/07/21 23:27:55 Ready to write response ...
	2024/07/21 23:27:55 Ready to marshal response ...
	2024/07/21 23:27:55 Ready to write response ...
	2024/07/21 23:28:00 Ready to marshal response ...
	2024/07/21 23:28:00 Ready to write response ...
	2024/07/21 23:28:06 Ready to marshal response ...
	2024/07/21 23:28:06 Ready to write response ...
	2024/07/21 23:28:13 Ready to marshal response ...
	2024/07/21 23:28:13 Ready to write response ...
	2024/07/21 23:28:19 Ready to marshal response ...
	2024/07/21 23:28:19 Ready to write response ...
	2024/07/21 23:28:19 Ready to marshal response ...
	2024/07/21 23:28:19 Ready to write response ...
	2024/07/21 23:28:31 Ready to marshal response ...
	2024/07/21 23:28:31 Ready to write response ...
	2024/07/21 23:28:59 Ready to marshal response ...
	2024/07/21 23:28:59 Ready to write response ...
	2024/07/21 23:29:31 Ready to marshal response ...
	2024/07/21 23:29:31 Ready to write response ...
	2024/07/21 23:30:33 Ready to marshal response ...
	2024/07/21 23:30:33 Ready to write response ...
	
	
	==> kernel <==
	 23:30:44 up 5 min,  0 users,  load average: 0.40, 1.06, 0.56
	Linux addons-688294 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cbddd19a5edd22cea77e671d8b69e53eac4f429920e77d04dbf06843304bb6d0] <==
	W0721 23:28:12.190944       1 handler_proxy.go:93] no RequestInfo found in the context
	E0721 23:28:12.191015       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0721 23:28:12.191714       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.108.25:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.108.25:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.108.25:443: connect: connection refused
	E0721 23:28:12.193399       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.108.25:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.108.25:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.108.25:443: connect: connection refused
	I0721 23:28:12.264274       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0721 23:28:13.211526       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0721 23:28:13.399045       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.156.89"}
	I0721 23:28:14.225086       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0721 23:28:15.260435       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0721 23:28:47.244893       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0721 23:29:13.776721       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0721 23:29:46.396661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0721 23:29:46.397637       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0721 23:29:46.424564       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0721 23:29:46.424609       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0721 23:29:46.442001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0721 23:29:46.442049       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0721 23:29:46.453866       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0721 23:29:46.453952       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0721 23:29:47.426197       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0721 23:29:47.454108       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0721 23:29:47.481875       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0721 23:30:33.686824       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.233.65"}
	E0721 23:30:35.942526       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
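
The v1beta1.metrics.k8s.io 503s and connection refusals above mean the aggregated metrics API was unreachable while metrics-server was starting, which bears directly on the TestAddons/parallel/MetricsServer failure later in this report. A sketch for reading the APIService's availability condition:

    # Print the Available condition of the aggregated metrics API.
    kubectl --context addons-688294 get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\t"}{.status.conditions[?(@.type=="Available")].message}{"\n"}'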
	
	
	==> kube-controller-manager [a75ceaeb4ab41339398a0cee66e7a13e30ce8f0543200c66b0c81fbfc71e8e45] <==
	W0721 23:29:57.970101       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:29:57.970133       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:30:03.739809       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:30:03.739858       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:30:06.823754       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:30:06.823868       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:30:07.098030       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:30:07.098071       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:30:17.656917       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:30:17.657044       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:30:20.295407       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:30:20.295566       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:30:21.451864       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:30:21.451950       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:30:24.931243       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:30:24.931297       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0721 23:30:33.548553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="47.309334ms"
	I0721 23:30:33.557691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.045677ms"
	I0721 23:30:33.557925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="65.276µs"
	I0721 23:30:33.562238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="28.152µs"
	I0721 23:30:35.855112       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0721 23:30:35.860222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="5.318µs"
	I0721 23:30:35.865366       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0721 23:30:37.280643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.250909ms"
	I0721 23:30:37.281715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="24.139µs"
	
	
	==> kube-proxy [c969bfef3f523281aaca87bb686017810ad5369caa22f2aaf3c61d00728f4e6b] <==
	I0721 23:26:28.077713       1 server_linux.go:69] "Using iptables proxy"
	I0721 23:26:28.089017       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.142"]
	I0721 23:26:28.219692       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:26:28.219739       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:26:28.219755       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:26:28.224119       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:26:28.224343       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:26:28.224364       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:26:28.225217       1 config.go:319] "Starting node config controller"
	I0721 23:26:28.225236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:26:28.225464       1 config.go:192] "Starting service config controller"
	I0721 23:26:28.225473       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:26:28.225493       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:26:28.225497       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:26:28.325936       1 shared_informer.go:320] Caches are synced for node config
	I0721 23:26:28.325953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0721 23:26:28.325964       1 shared_informer.go:320] Caches are synced for service config
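
kube-proxy reports iptables mode on IPv4 only; the service rules it programmed can be inspected from the node. A sketch, grepping for the KUBE-SVC chains kube-proxy installs in the nat table:

    out/minikube-linux-amd64 -p addons-688294 ssh "sudo iptables-save -t nat | grep KUBE-SVC"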
	
	
	==> kube-scheduler [fb3b0c0d0677ecee63a204b386d3d9f4ff8a5d981e988b5bc69b2b331496ecca] <==
	W0721 23:26:09.648718       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0721 23:26:09.648748       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0721 23:26:09.648807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0721 23:26:09.648831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0721 23:26:09.650009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0721 23:26:09.650042       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0721 23:26:10.452907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0721 23:26:10.452958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0721 23:26:10.514126       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0721 23:26:10.514215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0721 23:26:10.606629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0721 23:26:10.606756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0721 23:26:10.674409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0721 23:26:10.674454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0721 23:26:10.675246       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0721 23:26:10.675305       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0721 23:26:10.714219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0721 23:26:10.714330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0721 23:26:10.785423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0721 23:26:10.785554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0721 23:26:10.824394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:26:10.824440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0721 23:26:10.884238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0721 23:26:10.884282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0721 23:26:13.441723       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 21 23:30:33 addons-688294 kubelet[1273]: I0721 23:30:33.541241    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86e378b-c880-4595-8d6e-08e01fb0245d" containerName="csi-provisioner"
	Jul 21 23:30:33 addons-688294 kubelet[1273]: I0721 23:30:33.611828    1273 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q59p\" (UniqueName: \"kubernetes.io/projected/7e0542de-ebc0-4bf8-81fa-be127d873ed9-kube-api-access-8q59p\") pod \"hello-world-app-6778b5fc9f-j4zfd\" (UID: \"7e0542de-ebc0-4bf8-81fa-be127d873ed9\") " pod="default/hello-world-app-6778b5fc9f-j4zfd"
	Jul 21 23:30:33 addons-688294 kubelet[1273]: I0721 23:30:33.611897    1273 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7e0542de-ebc0-4bf8-81fa-be127d873ed9-gcp-creds\") pod \"hello-world-app-6778b5fc9f-j4zfd\" (UID: \"7e0542de-ebc0-4bf8-81fa-be127d873ed9\") " pod="default/hello-world-app-6778b5fc9f-j4zfd"
	Jul 21 23:30:34 addons-688294 kubelet[1273]: I0721 23:30:34.620391    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt76g\" (UniqueName: \"kubernetes.io/projected/3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb-kube-api-access-qt76g\") pod \"3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb\" (UID: \"3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb\") "
	Jul 21 23:30:34 addons-688294 kubelet[1273]: I0721 23:30:34.622240    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb-kube-api-access-qt76g" (OuterVolumeSpecName: "kube-api-access-qt76g") pod "3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb" (UID: "3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb"). InnerVolumeSpecName "kube-api-access-qt76g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 21 23:30:34 addons-688294 kubelet[1273]: I0721 23:30:34.720950    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qt76g\" (UniqueName: \"kubernetes.io/projected/3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb-kube-api-access-qt76g\") on node \"addons-688294\" DevicePath \"\""
	Jul 21 23:30:35 addons-688294 kubelet[1273]: I0721 23:30:35.237211    1273 scope.go:117] "RemoveContainer" containerID="a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890"
	Jul 21 23:30:35 addons-688294 kubelet[1273]: I0721 23:30:35.273180    1273 scope.go:117] "RemoveContainer" containerID="a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890"
	Jul 21 23:30:35 addons-688294 kubelet[1273]: E0721 23:30:35.273861    1273 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890\": container with ID starting with a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890 not found: ID does not exist" containerID="a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890"
	Jul 21 23:30:35 addons-688294 kubelet[1273]: I0721 23:30:35.273970    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890"} err="failed to get container status \"a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890\": rpc error: code = NotFound desc = could not find container \"a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890\": container with ID starting with a68d7cd0ac4579f2abf8db2e15cddf85a5f4afda76c4e9a369c894f9bfabd890 not found: ID does not exist"
	Jul 21 23:30:36 addons-688294 kubelet[1273]: I0721 23:30:35.999884    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb" path="/var/lib/kubelet/pods/3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb/volumes"
	Jul 21 23:30:36 addons-688294 kubelet[1273]: I0721 23:30:36.000340    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cafbe5c-1562-4ffe-9a30-dc3a5b78eaf9" path="/var/lib/kubelet/pods/3cafbe5c-1562-4ffe-9a30-dc3a5b78eaf9/volumes"
	Jul 21 23:30:36 addons-688294 kubelet[1273]: I0721 23:30:36.000740    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3e46919-7c5d-4ac4-9804-2a74d4842602" path="/var/lib/kubelet/pods/b3e46919-7c5d-4ac4-9804-2a74d4842602/volumes"
	Jul 21 23:30:37 addons-688294 kubelet[1273]: I0721 23:30:37.270604    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-j4zfd" podStartSLOduration=1.993649128 podStartE2EDuration="4.270575024s" podCreationTimestamp="2024-07-21 23:30:33 +0000 UTC" firstStartedPulling="2024-07-21 23:30:34.099131943 +0000 UTC m=+262.236747312" lastFinishedPulling="2024-07-21 23:30:36.376057836 +0000 UTC m=+264.513673208" observedRunningTime="2024-07-21 23:30:37.270459508 +0000 UTC m=+265.408074896" watchObservedRunningTime="2024-07-21 23:30:37.270575024 +0000 UTC m=+265.408190412"
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.153686    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njnjg\" (UniqueName: \"kubernetes.io/projected/1dd164e0-81d7-4889-8624-214c83da34d7-kube-api-access-njnjg\") pod \"1dd164e0-81d7-4889-8624-214c83da34d7\" (UID: \"1dd164e0-81d7-4889-8624-214c83da34d7\") "
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.153740    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1dd164e0-81d7-4889-8624-214c83da34d7-webhook-cert\") pod \"1dd164e0-81d7-4889-8624-214c83da34d7\" (UID: \"1dd164e0-81d7-4889-8624-214c83da34d7\") "
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.158299    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd164e0-81d7-4889-8624-214c83da34d7-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1dd164e0-81d7-4889-8624-214c83da34d7" (UID: "1dd164e0-81d7-4889-8624-214c83da34d7"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.160307    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd164e0-81d7-4889-8624-214c83da34d7-kube-api-access-njnjg" (OuterVolumeSpecName: "kube-api-access-njnjg") pod "1dd164e0-81d7-4889-8624-214c83da34d7" (UID: "1dd164e0-81d7-4889-8624-214c83da34d7"). InnerVolumeSpecName "kube-api-access-njnjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.254326    1273 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1dd164e0-81d7-4889-8624-214c83da34d7-webhook-cert\") on node \"addons-688294\" DevicePath \"\""
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.254361    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-njnjg\" (UniqueName: \"kubernetes.io/projected/1dd164e0-81d7-4889-8624-214c83da34d7-kube-api-access-njnjg\") on node \"addons-688294\" DevicePath \"\""
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.270089    1273 scope.go:117] "RemoveContainer" containerID="1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13"
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.287246    1273 scope.go:117] "RemoveContainer" containerID="1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13"
	Jul 21 23:30:39 addons-688294 kubelet[1273]: E0721 23:30:39.287621    1273 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13\": container with ID starting with 1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13 not found: ID does not exist" containerID="1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13"
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.287659    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13"} err="failed to get container status \"1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13\": rpc error: code = NotFound desc = could not find container \"1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13\": container with ID starting with 1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13 not found: ID does not exist"
	Jul 21 23:30:40 addons-688294 kubelet[1273]: I0721 23:30:40.000292    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd164e0-81d7-4889-8624-214c83da34d7" path="/var/lib/kubelet/pods/1dd164e0-81d7-4889-8624-214c83da34d7/volumes"
	
	
	==> storage-provisioner [216918ce9b7bbc2ae42421b5a53f7d188c1ab874575b496710855e7fc763457f] <==
	I0721 23:26:32.965409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0721 23:26:32.989049       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0721 23:26:32.989213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0721 23:26:33.013356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0721 23:26:33.013501       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-688294_31961987-1828-4958-ba43-c9112c88d31d!
	I0721 23:26:33.014273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e6cc932-54bb-49fd-b538-3f5ffac98293", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-688294_31961987-1828-4958-ba43-c9112c88d31d became leader
	I0721 23:26:33.114236       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-688294_31961987-1828-4958-ba43-c9112c88d31d!
	

                                                
                                                
-- /stdout --
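For context on the storage-provisioner lines in the dump above: acquiring the kube-system/k8s.io-minikube-hostpath lease before starting the controller is the standard client-go leader-election pattern. The sketch below is a minimal illustration of that pattern, not the provisioner's own code; the kubeconfig path and identity string are placeholders, and a LeaseLock stands in for the Endpoints-based lock that the event line records.

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	// Placeholder kubeconfig path; an assumption, not taken from this report.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same lease name and namespace as in the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "addons-688294_example-identity"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("became leader, starting provisioner controller")
				<-ctx.Done() // hold leadership until cancelled
			},
			OnStoppedLeading: func() {
				klog.Info("lost leadership, stopping")
			},
		},
	})
}

Only the elected instance runs OnStartedLeading, which is why the "Starting provisioner controller" message appears only after the lease is acquired.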
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-688294 -n addons-688294
helpers_test.go:261: (dbg) Run:  kubectl --context addons-688294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.91s)
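The failing step in this test is the reachability check "ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" recorded in the audit table below. As a rough Go sketch of the same idea (not minikube's test code; the node IP 192.168.39.142 comes from the logs above, and reaching it from the host rather than from inside the VM is an assumption), the virtual-host routing can be exercised like this:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Node IP taken from the post-mortem logs; hypothetical vantage point outside the VM.
	req, err := http.NewRequest(http.MethodGet, "http://192.168.39.142/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress routes on the Host header, so set it explicitly,
	// mirroring curl's -H 'Host: nginx.example.com'.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("ingress not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s (%d bytes)\n", resp.Status, len(body))
}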

                                                
                                    
TestAddons/parallel/MetricsServer (322.91s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.034863ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-bstqh" [ae1f9397-4344-4d3c-a416-ee538fc6ae94] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
helpers_test.go:344: "metrics-server-c59844bb4-bstqh" [ae1f9397-4344-4d3c-a416-ee538fc6ae94] Running
2024/07/21 23:28:12 [DEBUG] GET http://192.168.39.142:5000
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008930488s
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (79.587986ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-688294, age: 2m2.308279143s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (65.932081ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-688294, age: 2m4.852006905s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (61.731001ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-688294, age: 2m7.801094142s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (90.843838ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-688294, age: 2m11.523617677s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (64.47318ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wxvm9, age: 2m2.226045704s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (61.651764ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wxvm9, age: 2m11.826421027s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (66.386144ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wxvm9, age: 2m43.234249933s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (65.776791ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wxvm9, age: 3m14.751411064s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (62.970957ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wxvm9, age: 4m20.245114036s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (62.610451ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wxvm9, age: 5m40.153539999s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-688294 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-688294 top pods -n kube-system: exit status 1 (65.295356ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wxvm9, age: 7m2.47675022s
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
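The block above is addons_test.go polling "kubectl top pods -n kube-system" until metrics-server serves data, giving up once its budget is spent. A rough Go equivalent of that poll loop is sketched below; the 6-minute deadline mirrors the test's wait budget, while the fixed 10-second retry interval is illustrative rather than the test's actual backoff.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative budget, modeled on the test's 6m0s wait
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-688294",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Print(string(out)) // metrics-server is serving pod metrics
			return
		}
		fmt.Printf("metrics not ready (%v), retrying\n", err)
		time.Sleep(10 * time.Second) // illustrative interval
	}
	fmt.Println("failed checking metric server: deadline exceeded")
}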
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-688294 -n addons-688294
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-688294 logs -n 25: (1.29998811s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-870595                                                                     | download-only-870595 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-825436                                                                     | download-only-825436 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-576339                                                                     | download-only-576339 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-870595                                                                     | download-only-870595 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-302887 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | binary-mirror-302887                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36193                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-302887                                                                     | binary-mirror-302887 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| addons  | disable dashboard -p                                                                        | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | addons-688294                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | addons-688294                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-688294 --wait=true                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:27 UTC | 21 Jul 24 23:27 UTC |
	|         | -p addons-688294                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | -p addons-688294                                                                            |                      |         |         |                     |                     |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-688294 ip                                                                            | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | addons-688294                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-688294 ssh curl -s                                                                   | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-688294 ssh cat                                                                       | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:28 UTC |
	|         | /opt/local-path-provisioner/pvc-46a377b6-b11e-4fc9-9633-78f2e49f996d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:28 UTC | 21 Jul 24 23:29 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:29 UTC | 21 Jul 24 23:29 UTC |
	|         | addons-688294                                                                               |                      |         |         |                     |                     |
	| addons  | addons-688294 addons                                                                        | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:29 UTC | 21 Jul 24 23:29 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-688294 addons                                                                        | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:29 UTC | 21 Jul 24 23:29 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-688294 ip                                                                            | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:30 UTC | 21 Jul 24 23:30 UTC |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:30 UTC | 21 Jul 24 23:30 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-688294 addons disable                                                                | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:30 UTC | 21 Jul 24 23:30 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-688294 addons                                                                        | addons-688294        | jenkins | v1.33.1 | 21 Jul 24 23:33 UTC | 21 Jul 24 23:33 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:25:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:25:33.362987   13262 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:25:33.363081   13262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:33.363088   13262 out.go:304] Setting ErrFile to fd 2...
	I0721 23:25:33.363093   13262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:33.363238   13262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:25:33.363820   13262 out.go:298] Setting JSON to false
	I0721 23:25:33.364593   13262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":477,"bootTime":1721603856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:25:33.364649   13262 start.go:139] virtualization: kvm guest
	I0721 23:25:33.366935   13262 out.go:177] * [addons-688294] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:25:33.368340   13262 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:25:33.368351   13262 notify.go:220] Checking for updates...
	I0721 23:25:33.370905   13262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:25:33.372338   13262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:25:33.373521   13262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:25:33.374884   13262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0721 23:25:33.376082   13262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:25:33.377423   13262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:25:33.407892   13262 out.go:177] * Using the kvm2 driver based on user configuration
	I0721 23:25:33.409017   13262 start.go:297] selected driver: kvm2
	I0721 23:25:33.409033   13262 start.go:901] validating driver "kvm2" against <nil>
	I0721 23:25:33.409043   13262 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:25:33.409651   13262 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:33.409710   13262 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:25:33.423454   13262 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:25:33.423499   13262 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:25:33.423706   13262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:25:33.423746   13262 cni.go:84] Creating CNI manager for ""
	I0721 23:25:33.423753   13262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:25:33.423763   13262 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 23:25:33.423821   13262 start.go:340] cluster config:
	{Name:addons-688294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:25:33.423908   13262 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:33.425757   13262 out.go:177] * Starting "addons-688294" primary control-plane node in "addons-688294" cluster
	I0721 23:25:33.426813   13262 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:25:33.426840   13262 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0721 23:25:33.426849   13262 cache.go:56] Caching tarball of preloaded images
	I0721 23:25:33.426925   13262 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:25:33.426938   13262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:25:33.427223   13262 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/config.json ...
	I0721 23:25:33.427242   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/config.json: {Name:mka4e120652124e50c186dfd7958e54dc35e98eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:25:33.427403   13262 start.go:360] acquireMachinesLock for addons-688294: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:25:33.427460   13262 start.go:364] duration metric: took 40.193µs to acquireMachinesLock for "addons-688294"
	I0721 23:25:33.427483   13262 start.go:93] Provisioning new machine with config: &{Name:addons-688294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:25:33.427538   13262 start.go:125] createHost starting for "" (driver="kvm2")
	I0721 23:25:33.429092   13262 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0721 23:25:33.429215   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:25:33.429253   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:25:33.443407   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0721 23:25:33.443789   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:25:33.444311   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:25:33.444347   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:25:33.444666   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:25:33.444857   13262 main.go:141] libmachine: (addons-688294) Calling .GetMachineName
	I0721 23:25:33.444992   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:33.445133   13262 start.go:159] libmachine.API.Create for "addons-688294" (driver="kvm2")
	I0721 23:25:33.445178   13262 client.go:168] LocalClient.Create starting
	I0721 23:25:33.445222   13262 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0721 23:25:33.521741   13262 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0721 23:25:33.595014   13262 main.go:141] libmachine: Running pre-create checks...
	I0721 23:25:33.595036   13262 main.go:141] libmachine: (addons-688294) Calling .PreCreateCheck
	I0721 23:25:33.595553   13262 main.go:141] libmachine: (addons-688294) Calling .GetConfigRaw
	I0721 23:25:33.595996   13262 main.go:141] libmachine: Creating machine...
	I0721 23:25:33.596009   13262 main.go:141] libmachine: (addons-688294) Calling .Create
	I0721 23:25:33.596134   13262 main.go:141] libmachine: (addons-688294) Creating KVM machine...
	I0721 23:25:33.597178   13262 main.go:141] libmachine: (addons-688294) DBG | found existing default KVM network
	I0721 23:25:33.597905   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:33.597774   13284 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0721 23:25:33.597948   13262 main.go:141] libmachine: (addons-688294) DBG | created network xml: 
	I0721 23:25:33.597967   13262 main.go:141] libmachine: (addons-688294) DBG | <network>
	I0721 23:25:33.597999   13262 main.go:141] libmachine: (addons-688294) DBG |   <name>mk-addons-688294</name>
	I0721 23:25:33.598009   13262 main.go:141] libmachine: (addons-688294) DBG |   <dns enable='no'/>
	I0721 23:25:33.598015   13262 main.go:141] libmachine: (addons-688294) DBG |   
	I0721 23:25:33.598025   13262 main.go:141] libmachine: (addons-688294) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0721 23:25:33.598033   13262 main.go:141] libmachine: (addons-688294) DBG |     <dhcp>
	I0721 23:25:33.598039   13262 main.go:141] libmachine: (addons-688294) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0721 23:25:33.598044   13262 main.go:141] libmachine: (addons-688294) DBG |     </dhcp>
	I0721 23:25:33.598049   13262 main.go:141] libmachine: (addons-688294) DBG |   </ip>
	I0721 23:25:33.598053   13262 main.go:141] libmachine: (addons-688294) DBG |   
	I0721 23:25:33.598058   13262 main.go:141] libmachine: (addons-688294) DBG | </network>
	I0721 23:25:33.598064   13262 main.go:141] libmachine: (addons-688294) DBG | 
	I0721 23:25:33.603212   13262 main.go:141] libmachine: (addons-688294) DBG | trying to create private KVM network mk-addons-688294 192.168.39.0/24...
	I0721 23:25:33.666394   13262 main.go:141] libmachine: (addons-688294) DBG | private KVM network mk-addons-688294 192.168.39.0/24 created
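As an aside, the private network just created corresponds to the XML the driver printed a few lines earlier. A minimal sketch of the same operation with the libvirt Go bindings follows (assuming the libvirt.org/go/libvirt package; this is an illustration, not minikube's kvm2 driver code):

package main

import (
	"libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-addons-688294</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent network, then start it; libvirt serves the
	// DHCP range declared in the XML once the network is up.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		panic(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		panic(err)
	}
}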
	I0721 23:25:33.666418   13262 main.go:141] libmachine: (addons-688294) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294 ...
	I0721 23:25:33.666435   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:33.666348   13284 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:25:33.666458   13262 main.go:141] libmachine: (addons-688294) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:25:33.666485   13262 main.go:141] libmachine: (addons-688294) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:25:33.917964   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:33.917842   13284 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa...
	I0721 23:25:34.048910   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:34.048732   13284 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/addons-688294.rawdisk...
	I0721 23:25:34.048936   13262 main.go:141] libmachine: (addons-688294) DBG | Writing magic tar header
	I0721 23:25:34.048945   13262 main.go:141] libmachine: (addons-688294) DBG | Writing SSH key tar header
	I0721 23:25:34.048953   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:34.048876   13284 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294 ...
	I0721 23:25:34.048964   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294
	I0721 23:25:34.048984   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0721 23:25:34.049000   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:25:34.049012   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294 (perms=drwx------)
	I0721 23:25:34.049025   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0721 23:25:34.049032   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0721 23:25:34.049061   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0721 23:25:34.049073   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0721 23:25:34.049085   13262 main.go:141] libmachine: (addons-688294) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0721 23:25:34.049102   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0721 23:25:34.049113   13262 main.go:141] libmachine: (addons-688294) Creating domain...
	I0721 23:25:34.049123   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0721 23:25:34.049130   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home/jenkins
	I0721 23:25:34.049137   13262 main.go:141] libmachine: (addons-688294) DBG | Checking permissions on dir: /home
	I0721 23:25:34.049145   13262 main.go:141] libmachine: (addons-688294) DBG | Skipping /home - not owner
	I0721 23:25:34.050083   13262 main.go:141] libmachine: (addons-688294) define libvirt domain using xml: 
	I0721 23:25:34.050112   13262 main.go:141] libmachine: (addons-688294) <domain type='kvm'>
	I0721 23:25:34.050120   13262 main.go:141] libmachine: (addons-688294)   <name>addons-688294</name>
	I0721 23:25:34.050126   13262 main.go:141] libmachine: (addons-688294)   <memory unit='MiB'>4000</memory>
	I0721 23:25:34.050131   13262 main.go:141] libmachine: (addons-688294)   <vcpu>2</vcpu>
	I0721 23:25:34.050135   13262 main.go:141] libmachine: (addons-688294)   <features>
	I0721 23:25:34.050141   13262 main.go:141] libmachine: (addons-688294)     <acpi/>
	I0721 23:25:34.050146   13262 main.go:141] libmachine: (addons-688294)     <apic/>
	I0721 23:25:34.050153   13262 main.go:141] libmachine: (addons-688294)     <pae/>
	I0721 23:25:34.050157   13262 main.go:141] libmachine: (addons-688294)     
	I0721 23:25:34.050169   13262 main.go:141] libmachine: (addons-688294)   </features>
	I0721 23:25:34.050174   13262 main.go:141] libmachine: (addons-688294)   <cpu mode='host-passthrough'>
	I0721 23:25:34.050179   13262 main.go:141] libmachine: (addons-688294)   
	I0721 23:25:34.050185   13262 main.go:141] libmachine: (addons-688294)   </cpu>
	I0721 23:25:34.050190   13262 main.go:141] libmachine: (addons-688294)   <os>
	I0721 23:25:34.050197   13262 main.go:141] libmachine: (addons-688294)     <type>hvm</type>
	I0721 23:25:34.050202   13262 main.go:141] libmachine: (addons-688294)     <boot dev='cdrom'/>
	I0721 23:25:34.050211   13262 main.go:141] libmachine: (addons-688294)     <boot dev='hd'/>
	I0721 23:25:34.050234   13262 main.go:141] libmachine: (addons-688294)     <bootmenu enable='no'/>
	I0721 23:25:34.050252   13262 main.go:141] libmachine: (addons-688294)   </os>
	I0721 23:25:34.050259   13262 main.go:141] libmachine: (addons-688294)   <devices>
	I0721 23:25:34.050265   13262 main.go:141] libmachine: (addons-688294)     <disk type='file' device='cdrom'>
	I0721 23:25:34.050285   13262 main.go:141] libmachine: (addons-688294)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/boot2docker.iso'/>
	I0721 23:25:34.050296   13262 main.go:141] libmachine: (addons-688294)       <target dev='hdc' bus='scsi'/>
	I0721 23:25:34.050309   13262 main.go:141] libmachine: (addons-688294)       <readonly/>
	I0721 23:25:34.050323   13262 main.go:141] libmachine: (addons-688294)     </disk>
	I0721 23:25:34.050332   13262 main.go:141] libmachine: (addons-688294)     <disk type='file' device='disk'>
	I0721 23:25:34.050340   13262 main.go:141] libmachine: (addons-688294)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0721 23:25:34.050354   13262 main.go:141] libmachine: (addons-688294)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/addons-688294.rawdisk'/>
	I0721 23:25:34.050366   13262 main.go:141] libmachine: (addons-688294)       <target dev='hda' bus='virtio'/>
	I0721 23:25:34.050374   13262 main.go:141] libmachine: (addons-688294)     </disk>
	I0721 23:25:34.050385   13262 main.go:141] libmachine: (addons-688294)     <interface type='network'>
	I0721 23:25:34.050526   13262 main.go:141] libmachine: (addons-688294)       <source network='mk-addons-688294'/>
	I0721 23:25:34.050565   13262 main.go:141] libmachine: (addons-688294)       <model type='virtio'/>
	I0721 23:25:34.050582   13262 main.go:141] libmachine: (addons-688294)     </interface>
	I0721 23:25:34.050593   13262 main.go:141] libmachine: (addons-688294)     <interface type='network'>
	I0721 23:25:34.050624   13262 main.go:141] libmachine: (addons-688294)       <source network='default'/>
	I0721 23:25:34.050640   13262 main.go:141] libmachine: (addons-688294)       <model type='virtio'/>
	I0721 23:25:34.050651   13262 main.go:141] libmachine: (addons-688294)     </interface>
	I0721 23:25:34.050661   13262 main.go:141] libmachine: (addons-688294)     <serial type='pty'>
	I0721 23:25:34.050671   13262 main.go:141] libmachine: (addons-688294)       <target port='0'/>
	I0721 23:25:34.050681   13262 main.go:141] libmachine: (addons-688294)     </serial>
	I0721 23:25:34.050687   13262 main.go:141] libmachine: (addons-688294)     <console type='pty'>
	I0721 23:25:34.050701   13262 main.go:141] libmachine: (addons-688294)       <target type='serial' port='0'/>
	I0721 23:25:34.050725   13262 main.go:141] libmachine: (addons-688294)     </console>
	I0721 23:25:34.050745   13262 main.go:141] libmachine: (addons-688294)     <rng model='virtio'>
	I0721 23:25:34.050755   13262 main.go:141] libmachine: (addons-688294)       <backend model='random'>/dev/random</backend>
	I0721 23:25:34.050762   13262 main.go:141] libmachine: (addons-688294)     </rng>
	I0721 23:25:34.050778   13262 main.go:141] libmachine: (addons-688294)     
	I0721 23:25:34.050788   13262 main.go:141] libmachine: (addons-688294)     
	I0721 23:25:34.050797   13262 main.go:141] libmachine: (addons-688294)   </devices>
	I0721 23:25:34.050803   13262 main.go:141] libmachine: (addons-688294) </domain>
	I0721 23:25:34.050811   13262 main.go:141] libmachine: (addons-688294) 
	I0721 23:25:34.056668   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:55:e7:28 in network default
	I0721 23:25:34.057187   13262 main.go:141] libmachine: (addons-688294) Ensuring networks are active...
	I0721 23:25:34.057209   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:34.057844   13262 main.go:141] libmachine: (addons-688294) Ensuring network default is active
	I0721 23:25:34.058153   13262 main.go:141] libmachine: (addons-688294) Ensuring network mk-addons-688294 is active
	I0721 23:25:34.058898   13262 main.go:141] libmachine: (addons-688294) Getting domain xml...
	I0721 23:25:34.059566   13262 main.go:141] libmachine: (addons-688294) Creating domain...
	I0721 23:25:35.417351   13262 main.go:141] libmachine: (addons-688294) Waiting to get IP...
	I0721 23:25:35.418100   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:35.418461   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:35.418498   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:35.418451   13284 retry.go:31] will retry after 244.984124ms: waiting for machine to come up
	I0721 23:25:35.665004   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:35.665494   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:35.665537   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:35.665421   13284 retry.go:31] will retry after 350.812456ms: waiting for machine to come up
	I0721 23:25:36.017933   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:36.018350   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:36.018377   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:36.018294   13284 retry.go:31] will retry after 427.547876ms: waiting for machine to come up
	I0721 23:25:36.447874   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:36.448342   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:36.448377   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:36.448299   13284 retry.go:31] will retry after 508.437364ms: waiting for machine to come up
	I0721 23:25:36.957853   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:36.958168   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:36.958205   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:36.958127   13284 retry.go:31] will retry after 464.500826ms: waiting for machine to come up
	I0721 23:25:37.423770   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:37.424113   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:37.424136   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:37.424065   13284 retry.go:31] will retry after 754.05099ms: waiting for machine to come up
	I0721 23:25:38.181249   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:38.181690   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:38.181719   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:38.181638   13284 retry.go:31] will retry after 1.011173963s: waiting for machine to come up
	I0721 23:25:39.194108   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:39.194535   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:39.194569   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:39.194521   13284 retry.go:31] will retry after 1.205743617s: waiting for machine to come up
	I0721 23:25:40.401844   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:40.402201   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:40.402223   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:40.402151   13284 retry.go:31] will retry after 1.132035307s: waiting for machine to come up
	I0721 23:25:41.536536   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:41.536921   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:41.536947   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:41.536872   13284 retry.go:31] will retry after 2.169565885s: waiting for machine to come up
	I0721 23:25:43.708006   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:43.708394   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:43.708443   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:43.708364   13284 retry.go:31] will retry after 2.482734773s: waiting for machine to come up
	I0721 23:25:46.194027   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:46.194520   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:46.194544   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:46.194469   13284 retry.go:31] will retry after 2.973617951s: waiting for machine to come up
	I0721 23:25:49.170164   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:49.170530   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find current IP address of domain addons-688294 in network mk-addons-688294
	I0721 23:25:49.170552   13262 main.go:141] libmachine: (addons-688294) DBG | I0721 23:25:49.170498   13284 retry.go:31] will retry after 4.464588507s: waiting for machine to come up
	I0721 23:25:53.637069   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.637655   13262 main.go:141] libmachine: (addons-688294) Found IP for machine: 192.168.39.142
	I0721 23:25:53.637689   13262 main.go:141] libmachine: (addons-688294) Reserving static IP address...
	I0721 23:25:53.637703   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has current primary IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.637980   13262 main.go:141] libmachine: (addons-688294) DBG | unable to find host DHCP lease matching {name: "addons-688294", mac: "52:54:00:58:13:11", ip: "192.168.39.142"} in network mk-addons-688294
	I0721 23:25:53.708516   13262 main.go:141] libmachine: (addons-688294) DBG | Getting to WaitForSSH function...
	I0721 23:25:53.708583   13262 main.go:141] libmachine: (addons-688294) Reserved static IP address: 192.168.39.142
	I0721 23:25:53.708600   13262 main.go:141] libmachine: (addons-688294) Waiting for SSH to be available...
	I0721 23:25:53.710621   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.710977   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:13:11}
	I0721 23:25:53.711003   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.711154   13262 main.go:141] libmachine: (addons-688294) DBG | Using SSH client type: external
	I0721 23:25:53.711182   13262 main.go:141] libmachine: (addons-688294) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa (-rw-------)
	I0721 23:25:53.711209   13262 main.go:141] libmachine: (addons-688294) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0721 23:25:53.711250   13262 main.go:141] libmachine: (addons-688294) DBG | About to run SSH command:
	I0721 23:25:53.711267   13262 main.go:141] libmachine: (addons-688294) DBG | exit 0
	I0721 23:25:53.846738   13262 main.go:141] libmachine: (addons-688294) DBG | SSH cmd err, output: <nil>: 
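The external-client probe above runs `ssh ... exit 0` until the guest answers; "SSH cmd err, output: <nil>" is the success case. A rough Go sketch of the same probe, using only the flags visible in the DBG line (waitForSSH is an illustrative name):

package sketch

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH shells out to the system ssh client until `exit 0` succeeds.
func waitForSSH(ip, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil // the guest answered; SSH is available
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", ip)
}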
	I0721 23:25:53.846974   13262 main.go:141] libmachine: (addons-688294) KVM machine creation complete!
	I0721 23:25:53.847307   13262 main.go:141] libmachine: (addons-688294) Calling .GetConfigRaw
	I0721 23:25:53.847872   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:53.848116   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:53.848275   13262 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0721 23:25:53.848290   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:25:53.849611   13262 main.go:141] libmachine: Detecting operating system of created instance...
	I0721 23:25:53.849625   13262 main.go:141] libmachine: Waiting for SSH to be available...
	I0721 23:25:53.849631   13262 main.go:141] libmachine: Getting to WaitForSSH function...
	I0721 23:25:53.849637   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:53.852238   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.852617   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:53.852645   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.852800   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:53.852983   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:53.853118   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:53.853232   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:53.853388   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:53.853646   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:53.853662   13262 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0721 23:25:53.965659   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:25:53.965705   13262 main.go:141] libmachine: Detecting the provisioner...
	I0721 23:25:53.965718   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:53.968428   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.968848   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:53.968874   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:53.968963   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:53.969177   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:53.969365   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:53.969540   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:53.969696   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:53.969858   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:53.969867   13262 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0721 23:25:54.082831   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0721 23:25:54.082908   13262 main.go:141] libmachine: found compatible host: buildroot
	I0721 23:25:54.082917   13262 main.go:141] libmachine: Provisioning with buildroot...
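The provisioner is picked by reading the `cat /etc/os-release` output shown above (NAME=Buildroot, VERSION_ID=2023.02.9, ...). A minimal sketch of that key=value parse, with an illustrative function name:

package sketch

import "strings"

// parseOSRelease splits os-release output into a map; kv["ID"] == "buildroot"
// is what selects the buildroot provisioner here.
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	for _, line := range strings.Split(out, "\n") {
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue // skip blank or malformed lines
		}
		kv[k] = strings.Trim(v, `"`) // values may be quoted, e.g. PRETTY_NAME
	}
	return kv
}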
	I0721 23:25:54.082924   13262 main.go:141] libmachine: (addons-688294) Calling .GetMachineName
	I0721 23:25:54.083145   13262 buildroot.go:166] provisioning hostname "addons-688294"
	I0721 23:25:54.083169   13262 main.go:141] libmachine: (addons-688294) Calling .GetMachineName
	I0721 23:25:54.083323   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.085689   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.086017   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.086041   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.086167   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.086356   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.086537   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.086705   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.086856   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:54.087057   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:54.087071   13262 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-688294 && echo "addons-688294" | sudo tee /etc/hostname
	I0721 23:25:54.211308   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-688294
	
	I0721 23:25:54.211337   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.213753   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.214079   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.214107   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.214254   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.214463   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.214644   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.214794   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.214966   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:54.215189   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:54.215208   13262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-688294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-688294/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-688294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:25:54.335254   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:25:54.335290   13262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:25:54.335325   13262 buildroot.go:174] setting up certificates
	I0721 23:25:54.335344   13262 provision.go:84] configureAuth start
	I0721 23:25:54.335360   13262 main.go:141] libmachine: (addons-688294) Calling .GetMachineName
	I0721 23:25:54.335660   13262 main.go:141] libmachine: (addons-688294) Calling .GetIP
	I0721 23:25:54.337920   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.338309   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.338348   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.338497   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.340292   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.340599   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.340632   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.340702   13262 provision.go:143] copyHostCerts
	I0721 23:25:54.340783   13262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:25:54.340937   13262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:25:54.341011   13262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:25:54.341072   13262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.addons-688294 san=[127.0.0.1 192.168.39.142 addons-688294 localhost minikube]
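The server cert above is signed by the local CA with the SANs from the san=[...] list. A sketch of that issuance with Go's crypto/x509, under the assumption that caCert/caKey are the already-loaded ca.pem/ca-key.pem pair (issueServerCert is an illustrative name, not minikube's):

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert returns DER bytes for a server cert whose SANs match
// the logged list: 127.0.0.1, 192.168.39.142, addons-688294, localhost, minikube.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-688294"}},
		DNSNames:     []string{"addons-688294", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.142")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}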
	I0721 23:25:54.546661   13262 provision.go:177] copyRemoteCerts
	I0721 23:25:54.546714   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:25:54.546735   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.549037   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.549383   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.549417   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.549629   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.549838   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.550001   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.550109   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:25:54.636210   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:25:54.658477   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0721 23:25:54.679920   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:25:54.701150   13262 provision.go:87] duration metric: took 365.790069ms to configureAuth
	I0721 23:25:54.701176   13262 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:25:54.701408   13262 config.go:182] Loaded profile config "addons-688294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:25:54.701506   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.703970   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.704305   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.704338   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.704448   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.704626   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.704787   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.704914   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.705077   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:54.705263   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:54.705286   13262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:25:54.964962   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:25:54.964985   13262 main.go:141] libmachine: Checking connection to Docker...
	I0721 23:25:54.964992   13262 main.go:141] libmachine: (addons-688294) Calling .GetURL
	I0721 23:25:54.966426   13262 main.go:141] libmachine: (addons-688294) DBG | Using libvirt version 6000000
	I0721 23:25:54.968741   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.969081   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.969109   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.969234   13262 main.go:141] libmachine: Docker is up and running!
	I0721 23:25:54.969247   13262 main.go:141] libmachine: Reticulating splines...
	I0721 23:25:54.969254   13262 client.go:171] duration metric: took 21.524065935s to LocalClient.Create
	I0721 23:25:54.969275   13262 start.go:167] duration metric: took 21.524142859s to libmachine.API.Create "addons-688294"
	I0721 23:25:54.969293   13262 start.go:293] postStartSetup for "addons-688294" (driver="kvm2")
	I0721 23:25:54.969305   13262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:25:54.969322   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:54.969547   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:25:54.969570   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:54.971881   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.972200   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:54.972218   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:54.972388   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:54.972554   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:54.972692   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:54.972797   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:25:55.060553   13262 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:25:55.064646   13262 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:25:55.064674   13262 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:25:55.064743   13262 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:25:55.064772   13262 start.go:296] duration metric: took 95.471001ms for postStartSetup
	I0721 23:25:55.064813   13262 main.go:141] libmachine: (addons-688294) Calling .GetConfigRaw
	I0721 23:25:55.107523   13262 main.go:141] libmachine: (addons-688294) Calling .GetIP
	I0721 23:25:55.110306   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.110661   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.110690   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.110928   13262 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/config.json ...
	I0721 23:25:55.169976   13262 start.go:128] duration metric: took 21.74242165s to createHost
	I0721 23:25:55.170015   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:55.173313   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.173672   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.173718   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.173870   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:55.174100   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:55.174275   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:55.174406   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:55.174634   13262 main.go:141] libmachine: Using SSH client type: native
	I0721 23:25:55.174834   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0721 23:25:55.174846   13262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 23:25:55.287228   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721604355.262674363
	
	I0721 23:25:55.287250   13262 fix.go:216] guest clock: 1721604355.262674363
	I0721 23:25:55.287259   13262 fix.go:229] Guest: 2024-07-21 23:25:55.262674363 +0000 UTC Remote: 2024-07-21 23:25:55.16999872 +0000 UTC m=+21.837725633 (delta=92.675643ms)
	I0721 23:25:55.287283   13262 fix.go:200] guest clock delta is within tolerance: 92.675643ms
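The guest-clock check parses the `date +%s.%N` output above and compares it to the host's wall clock (here the delta is 92.675643ms, within tolerance). A rough sketch of that comparison; clockDelta is an illustrative name and float parsing loses a little nanosecond precision:

package sketch

import (
	"strconv"
	"strings"
	"time"
)

// clockDelta converts `date +%s.%N` output to a time and returns the
// absolute difference from the host-side timestamp.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil // caller checks delta against its tolerance
}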
	I0721 23:25:55.287289   13262 start.go:83] releasing machines lock for "addons-688294", held for 21.859817716s
	I0721 23:25:55.287311   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:55.287564   13262 main.go:141] libmachine: (addons-688294) Calling .GetIP
	I0721 23:25:55.290090   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.290437   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.290462   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.290682   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:55.291117   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:55.291301   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:25:55.291383   13262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:25:55.291434   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:55.291537   13262 ssh_runner.go:195] Run: cat /version.json
	I0721 23:25:55.291562   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:25:55.294042   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.294300   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.294503   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.294529   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.294651   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:55.294813   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:55.294814   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:55.294884   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:55.294969   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:25:55.295019   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:55.295129   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:25:55.295207   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:25:55.295404   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:25:55.295566   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:25:55.375642   13262 ssh_runner.go:195] Run: systemctl --version
	I0721 23:25:55.423790   13262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:25:56.001801   13262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:25:56.007367   13262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:25:56.007434   13262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:25:56.023304   13262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 23:25:56.023338   13262 start.go:495] detecting cgroup driver to use...
	I0721 23:25:56.023397   13262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:25:56.039561   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:25:56.051900   13262 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:25:56.051946   13262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:25:56.064482   13262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:25:56.077385   13262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:25:56.187776   13262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:25:56.322445   13262 docker.go:233] disabling docker service ...
	I0721 23:25:56.322513   13262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:25:56.336618   13262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:25:56.348225   13262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:25:56.471141   13262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:25:56.599056   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0721 23:25:56.611867   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:25:56.628841   13262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:25:56.628905   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.638519   13262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:25:56.638581   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.648122   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.657478   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.666927   13262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:25:56.676578   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.685999   13262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:25:56.701302   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
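The sed invocations above are all line-oriented rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default_sysctls). The cgroup_manager edit, done in Go over the file contents instead of via sed, would look roughly like this sketch (setCgroupManager is an illustrative name):

package sketch

import "regexp"

// setCgroupManager replaces any existing cgroup_manager line, matching
// the sed pattern `s|^.*cgroup_manager = .*$|...|` used above.
func setCgroupManager(conf []byte, driver string) []byte {
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	return re.ReplaceAll(conf, []byte(`cgroup_manager = "`+driver+`"`))
}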
	I0721 23:25:56.710924   13262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:25:56.719583   13262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0721 23:25:56.719637   13262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0721 23:25:56.730999   13262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:25:56.739812   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:25:56.855701   13262 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0721 23:25:56.988967   13262 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:25:56.989057   13262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:25:56.993275   13262 start.go:563] Will wait 60s for crictl version
	I0721 23:25:56.993369   13262 ssh_runner.go:195] Run: which crictl
	I0721 23:25:56.996719   13262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:25:57.033935   13262 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
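"Will wait 60s for socket path" above amounts to polling stat on /var/run/crio/crio.sock after the crio restart, then shelling out to crictl for the version banner shown. A rough sketch under those assumptions (waitForCRISocket is our own name):

package sketch

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

// waitForCRISocket polls for the CRI socket, then confirms the runtime
// answers by running `crictl version`.
func waitForCRISocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			if out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output(); err == nil {
				log.Printf("crictl version output:\n%s", out)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}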
	I0721 23:25:57.034057   13262 ssh_runner.go:195] Run: crio --version
	I0721 23:25:57.060539   13262 ssh_runner.go:195] Run: crio --version
	I0721 23:25:57.088942   13262 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:25:57.090390   13262 main.go:141] libmachine: (addons-688294) Calling .GetIP
	I0721 23:25:57.092913   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:57.093289   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:25:57.093315   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:25:57.093651   13262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:25:57.097474   13262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:25:57.109324   13262 kubeadm.go:883] updating cluster {Name:addons-688294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0721 23:25:57.109451   13262 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:25:57.109507   13262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:25:57.138158   13262 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0721 23:25:57.138238   13262 ssh_runner.go:195] Run: which lz4
	I0721 23:25:57.141792   13262 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0721 23:25:57.145491   13262 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0721 23:25:57.145519   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0721 23:25:58.252915   13262 crio.go:462] duration metric: took 1.111140121s to copy over tarball
	I0721 23:25:58.252991   13262 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0721 23:26:00.453629   13262 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200606224s)
	I0721 23:26:00.453665   13262 crio.go:469] duration metric: took 2.200720769s to extract the tarball
	I0721 23:26:00.453675   13262 ssh_runner.go:146] rm: /preloaded.tar.lz4
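The preload step above copies the tarball over scp, extracts it under /var with `tar -I lz4`, times the extraction, and removes the archive. A sketch of that sequence, assuming a hypothetical runOverSSH helper for the remote shell (minikube's real ssh_runner differs):

package sketch

import (
	"log"
	"time"
)

// extractPreload runs the timed extract-then-cleanup seen in the log.
func extractPreload(runOverSSH func(cmd string) error) error {
	start := time.Now()
	cmd := "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"
	if err := runOverSSH(cmd); err != nil {
		return err
	}
	log.Printf("duration metric: took %s to extract the tarball", time.Since(start))
	return runOverSSH("sudo rm -f /preloaded.tar.lz4")
}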
	I0721 23:26:00.495754   13262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:26:00.537230   13262 crio.go:514] all images are preloaded for cri-o runtime.
	I0721 23:26:00.537255   13262 cache_images.go:84] Images are preloaded, skipping loading
	I0721 23:26:00.537264   13262 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.30.3 crio true true} ...
	I0721 23:26:00.537391   13262 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-688294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 23:26:00.537473   13262 ssh_runner.go:195] Run: crio config
	I0721 23:26:00.578905   13262 cni.go:84] Creating CNI manager for ""
	I0721 23:26:00.578923   13262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:26:00.578932   13262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 23:26:00.578957   13262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-688294 NodeName:addons-688294 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 23:26:00.579143   13262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-688294"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
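The generated kubeadm config above pins podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. A quick sanity check that two such CIDRs do not overlap (illustrative only, not part of minikube) could look like:

package sketch

import "net"

// cidrsOverlap reports whether two CIDRs intersect; for CIDR blocks this
// reduces to one network containing the other's base address.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}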
	
	I0721 23:26:00.579208   13262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:26:00.588452   13262 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 23:26:00.588517   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0721 23:26:00.597034   13262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0721 23:26:00.611832   13262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:26:00.626325   13262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0721 23:26:00.643052   13262 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0721 23:26:00.646647   13262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:26:00.657742   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:26:00.764511   13262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:26:00.779994   13262 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294 for IP: 192.168.39.142
	I0721 23:26:00.780010   13262 certs.go:194] generating shared ca certs ...
	I0721 23:26:00.780024   13262 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:00.780160   13262 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:26:00.916144   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt ...
	I0721 23:26:00.916179   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt: {Name:mk13f89e22caf5001d08863d12b0cbb363da5b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:00.916375   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key ...
	I0721 23:26:00.916391   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key: {Name:mkd5a701b56963d453c76ebba0190d75523b6b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:00.916506   13262 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:26:01.040049   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt ...
	I0721 23:26:01.040078   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt: {Name:mk56b5fbecd9bed1d6a729844440840ef853de54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.040262   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key ...
	I0721 23:26:01.040276   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key: {Name:mkb1fc6e8f2aa4018dca66106de7aad53ea9ca5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.040387   13262 certs.go:256] generating profile certs ...
	I0721 23:26:01.040444   13262 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.key
	I0721 23:26:01.040459   13262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt with IP's: []
	I0721 23:26:01.143847   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt ...
	I0721 23:26:01.143881   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: {Name:mk502a02dd0545f610ec2430272e7dc34e6c9e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.144223   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.key ...
	I0721 23:26:01.144248   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.key: {Name:mkda774d18c002fe67c556b5bb5c0ea8990bdd85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.144396   13262 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key.46adf13e
	I0721 23:26:01.144416   13262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt.46adf13e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.142]
	I0721 23:26:01.262423   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt.46adf13e ...
	I0721 23:26:01.262453   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt.46adf13e: {Name:mkd0ddade9e48636d5652f3537abe938ddee8ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.262637   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key.46adf13e ...
	I0721 23:26:01.262652   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key.46adf13e: {Name:mk14da2e09af673932b7e7c0725f59d34b59d820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.262750   13262 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt.46adf13e -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt
	I0721 23:26:01.262823   13262 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key.46adf13e -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key
	I0721 23:26:01.262869   13262 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.key
	I0721 23:26:01.262884   13262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.crt with IP's: []
	I0721 23:26:01.370707   13262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.crt ...
	I0721 23:26:01.370737   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.crt: {Name:mkc7806d29165ead30a6309d111a88af9f1dabdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.370912   13262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.key ...
	I0721 23:26:01.370925   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.key: {Name:mk893899356780b66e17d05b51227314e0191484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:01.371110   13262 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:26:01.371144   13262 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:26:01.371167   13262 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:26:01.371192   13262 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:26:01.371838   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:26:01.394875   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:26:01.416946   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:26:01.440095   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:26:01.481066   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0721 23:26:01.505081   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:26:01.526585   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:26:01.547703   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:26:01.568733   13262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:26:01.589751   13262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 23:26:01.604624   13262 ssh_runner.go:195] Run: openssl version
	I0721 23:26:01.609736   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:26:01.619191   13262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:26:01.623065   13262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:26:01.623109   13262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:26:01.628242   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
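The two commands above install minikubeCA.pem into the OpenSSL hash directory: `openssl x509 -hash` yields the subject hash (b5213941 here), which names the /etc/ssl/certs symlink. Done locally in Go, that would be roughly (linkCAByHash is an illustrative name):

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash symlinks a CA PEM into /etc/ssl/certs under its subject hash.
func linkCAByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as in the log line above
	return os.Symlink(pemPath, filepath.Join("/etc/ssl/certs", hash+".0"))
}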
	I0721 23:26:01.637878   13262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:26:01.641473   13262 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:26:01.641520   13262 kubeadm.go:392] StartCluster: {Name:addons-688294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-688294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:26:01.641610   13262 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0721 23:26:01.641663   13262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0721 23:26:01.679671   13262 cri.go:89] found id: ""
	I0721 23:26:01.679738   13262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0721 23:26:01.688518   13262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 23:26:01.696817   13262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 23:26:01.705137   13262 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 23:26:01.705155   13262 kubeadm.go:157] found existing configuration files:
	
	I0721 23:26:01.705202   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0721 23:26:01.713122   13262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 23:26:01.713168   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 23:26:01.721297   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0721 23:26:01.729198   13262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 23:26:01.729245   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 23:26:01.737455   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0721 23:26:01.745372   13262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 23:26:01.745430   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 23:26:01.753636   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0721 23:26:01.761447   13262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 23:26:01.761494   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
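The four grep/rm pairs above are the stale-config sweep: each kubeadm-managed file under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that does not mention https://control-plane.minikube.internal:8443 is removed so `kubeadm init` will regenerate it. On this first boot every grep exits 2 (the files do not exist) and every rm is a no-op. A rough local Go equivalent (the real code runs these commands over SSH):

    package main

    import (
        "bytes"
        "os"
    )

    // cleanStaleKubeconfigs removes any kubeadm-managed config that does not
    // point at the expected control-plane endpoint, so `kubeadm init` will
    // rewrite it. Missing files are treated the same as stale ones.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && bytes.Contains(data, []byte(endpoint)) {
                continue // config already targets the right endpoint
            }
            os.Remove(f) // ignore errors: the file may simply not exist yet
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }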
	I0721 23:26:01.769504   13262 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 23:26:01.933098   13262 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 23:26:12.713285   13262 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0721 23:26:12.713351   13262 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 23:26:12.713428   13262 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 23:26:12.713514   13262 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 23:26:12.713656   13262 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0721 23:26:12.713739   13262 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 23:26:12.715666   13262 out.go:204]   - Generating certificates and keys ...
	I0721 23:26:12.715743   13262 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 23:26:12.715812   13262 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 23:26:12.715915   13262 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0721 23:26:12.716007   13262 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0721 23:26:12.716098   13262 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0721 23:26:12.716171   13262 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0721 23:26:12.716257   13262 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0721 23:26:12.716433   13262 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-688294 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0721 23:26:12.716518   13262 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0721 23:26:12.716634   13262 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-688294 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0721 23:26:12.716690   13262 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0721 23:26:12.716749   13262 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0721 23:26:12.716788   13262 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0721 23:26:12.716837   13262 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 23:26:12.716900   13262 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 23:26:12.716978   13262 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0721 23:26:12.717036   13262 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 23:26:12.717104   13262 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 23:26:12.717158   13262 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 23:26:12.717253   13262 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 23:26:12.717353   13262 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 23:26:12.718797   13262 out.go:204]   - Booting up control plane ...
	I0721 23:26:12.718891   13262 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 23:26:12.719010   13262 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 23:26:12.719103   13262 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 23:26:12.719235   13262 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 23:26:12.719311   13262 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 23:26:12.719358   13262 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 23:26:12.719525   13262 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0721 23:26:12.719618   13262 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0721 23:26:12.719702   13262 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00119796s
	I0721 23:26:12.719794   13262 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0721 23:26:12.719877   13262 kubeadm.go:310] [api-check] The API server is healthy after 5.001951151s
	I0721 23:26:12.720002   13262 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 23:26:12.720122   13262 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 23:26:12.720189   13262 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 23:26:12.720362   13262 kubeadm.go:310] [mark-control-plane] Marking the node addons-688294 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 23:26:12.720440   13262 kubeadm.go:310] [bootstrap-token] Using token: b18roa.jlvyrt5y4dz1vq43
	I0721 23:26:12.722528   13262 out.go:204]   - Configuring RBAC rules ...
	I0721 23:26:12.722675   13262 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 23:26:12.722786   13262 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 23:26:12.722932   13262 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 23:26:12.723075   13262 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0721 23:26:12.723206   13262 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 23:26:12.723312   13262 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 23:26:12.723456   13262 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 23:26:12.723519   13262 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 23:26:12.723589   13262 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 23:26:12.723604   13262 kubeadm.go:310] 
	I0721 23:26:12.723662   13262 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 23:26:12.723668   13262 kubeadm.go:310] 
	I0721 23:26:12.723758   13262 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 23:26:12.723771   13262 kubeadm.go:310] 
	I0721 23:26:12.723811   13262 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 23:26:12.723874   13262 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 23:26:12.723946   13262 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 23:26:12.723956   13262 kubeadm.go:310] 
	I0721 23:26:12.724023   13262 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 23:26:12.724030   13262 kubeadm.go:310] 
	I0721 23:26:12.724077   13262 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 23:26:12.724091   13262 kubeadm.go:310] 
	I0721 23:26:12.724166   13262 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 23:26:12.724264   13262 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 23:26:12.724354   13262 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 23:26:12.724363   13262 kubeadm.go:310] 
	I0721 23:26:12.724467   13262 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 23:26:12.724566   13262 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 23:26:12.724574   13262 kubeadm.go:310] 
	I0721 23:26:12.724684   13262 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b18roa.jlvyrt5y4dz1vq43 \
	I0721 23:26:12.724801   13262 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0721 23:26:12.724832   13262 kubeadm.go:310] 	--control-plane 
	I0721 23:26:12.724841   13262 kubeadm.go:310] 
	I0721 23:26:12.724951   13262 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 23:26:12.724960   13262 kubeadm.go:310] 
	I0721 23:26:12.725054   13262 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b18roa.jlvyrt5y4dz1vq43 \
	I0721 23:26:12.725165   13262 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0721 23:26:12.725175   13262 cni.go:84] Creating CNI manager for ""
	I0721 23:26:12.725181   13262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:26:12.727267   13262 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0721 23:26:12.728441   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0721 23:26:12.738324   13262 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
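The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous line. The log does not show its contents; the sketch below writes an illustrative bridge-plus-portmap conflist of the same general shape, with field values that are assumptions rather than the real payload:

    package main

    import "os"

    // A bridge CNI chain of the general shape minikube installs: a bridge
    // plugin that owns pod IPAM, followed by portmap for hostPort support.
    // Values are illustrative only -- the log does not show the real file.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        os.MkdirAll("/etc/cni/net.d", 0o755)
        os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
    }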
	I0721 23:26:12.756061   13262 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 23:26:12.756137   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-688294 minikube.k8s.io/updated_at=2024_07_21T23_26_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=addons-688294 minikube.k8s.io/primary=true
	I0721 23:26:12.756140   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:12.773146   13262 ops.go:34] apiserver oom_adj: -16
	I0721 23:26:12.880510   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:13.380756   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:13.881396   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:14.380973   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:14.880589   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:15.381294   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:15.880665   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:16.381287   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:16.881293   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:17.380571   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:17.881459   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:18.381067   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:18.881210   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:19.381160   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:19.881367   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:20.381228   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:20.880973   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:21.381263   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:21.880583   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:22.380931   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:22.881129   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:23.381226   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:23.880844   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:24.380559   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:24.881447   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:25.381473   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:25.880636   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:26.380612   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:26:26.471530   13262 kubeadm.go:1113] duration metric: took 13.715441733s to wait for elevateKubeSystemPrivileges
	I0721 23:26:26.471557   13262 kubeadm.go:394] duration metric: took 24.8300396s to StartCluster
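The burst of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: the default ServiceAccount is created asynchronously by the controller manager, and minikube polls on a roughly 500ms cadence until it appears (about 13.7s here) before relying on the minikube-rbac ClusterRoleBinding. A simplified version of that loop (the timeout value is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds,
    // mirroring the ~500ms retry cadence visible in the log timestamps.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil // controller manager has created the ServiceAccount
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }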
	I0721 23:26:26.471576   13262 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:26.471703   13262 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:26:26.472110   13262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:26.472298   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0721 23:26:26.472345   13262 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:26:26.472389   13262 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
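Everything from here to the end of the section interleaves because minikube enables the selected addons concurrently: each addon goroutine launches its own kvm2 plugin server, opens its own SSH client, and pushes its own manifests, so their log lines arrive shuffled together. A minimal sketch of that fan-out (addon names taken from the toEnable map above; the per-addon work is elided):

    package main

    import (
        "fmt"
        "sync"
    )

    // Fan-out sketch: each enabled addon is processed on its own goroutine,
    // which is why the "Setting addon ..." and libmachine lines below interleave.
    func main() {
        toEnable := map[string]bool{
            "yakd": true, "registry": true, "ingress": true,
            "metrics-server": true, "volcano": true, "default-storageclass": true,
        }
        var wg sync.WaitGroup
        for name, enabled := range toEnable {
            if !enabled {
                continue
            }
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                fmt.Printf("Setting addon %s=true in profile %q\n", name, "addons-688294")
                // ... launch driver plugin, ssh in, scp manifests, apply them ...
            }(name)
        }
        wg.Wait()
    }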
	I0721 23:26:26.472474   13262 addons.go:69] Setting yakd=true in profile "addons-688294"
	I0721 23:26:26.472501   13262 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-688294"
	I0721 23:26:26.472514   13262 addons.go:69] Setting helm-tiller=true in profile "addons-688294"
	I0721 23:26:26.472531   13262 addons.go:234] Setting addon yakd=true in "addons-688294"
	I0721 23:26:26.472535   13262 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-688294"
	I0721 23:26:26.472538   13262 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-688294"
	I0721 23:26:26.472547   13262 config.go:182] Loaded profile config "addons-688294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:26:26.472553   13262 addons.go:234] Setting addon helm-tiller=true in "addons-688294"
	I0721 23:26:26.472563   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472570   13262 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-688294"
	I0721 23:26:26.472597   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472597   13262 addons.go:69] Setting volcano=true in profile "addons-688294"
	I0721 23:26:26.472604   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472618   13262 addons.go:234] Setting addon volcano=true in "addons-688294"
	I0721 23:26:26.472638   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472683   13262 addons.go:69] Setting storage-provisioner=true in profile "addons-688294"
	I0721 23:26:26.472496   13262 addons.go:69] Setting cloud-spanner=true in profile "addons-688294"
	I0721 23:26:26.472703   13262 addons.go:234] Setting addon storage-provisioner=true in "addons-688294"
	I0721 23:26:26.472718   13262 addons.go:234] Setting addon cloud-spanner=true in "addons-688294"
	I0721 23:26:26.472723   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.472737   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.473021   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473037   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473050   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473052   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473064   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473064   13262 addons.go:69] Setting volumesnapshots=true in profile "addons-688294"
	I0721 23:26:26.473071   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473076   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473084   13262 addons.go:234] Setting addon volumesnapshots=true in "addons-688294"
	I0721 23:26:26.473103   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.473103   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.472510   13262 addons.go:69] Setting gcp-auth=true in profile "addons-688294"
	I0721 23:26:26.472506   13262 addons.go:69] Setting default-storageclass=true in profile "addons-688294"
	I0721 23:26:26.473139   13262 mustload.go:65] Loading cluster: addons-688294
	I0721 23:26:26.473145   13262 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-688294"
	I0721 23:26:26.473287   13262 config.go:182] Loaded profile config "addons-688294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:26:26.472493   13262 addons.go:69] Setting registry=true in profile "addons-688294"
	I0721 23:26:26.473369   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473383   13262 addons.go:234] Setting addon registry=true in "addons-688294"
	I0721 23:26:26.472501   13262 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-688294"
	I0721 23:26:26.473393   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473411   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.473420   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473439   13262 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-688294"
	I0721 23:26:26.473385   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473123   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.472528   13262 addons.go:69] Setting metrics-server=true in profile "addons-688294"
	I0721 23:26:26.473549   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473552   13262 addons.go:234] Setting addon metrics-server=true in "addons-688294"
	I0721 23:26:26.473564   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473609   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473626   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.472518   13262 addons.go:69] Setting inspektor-gadget=true in profile "addons-688294"
	I0721 23:26:26.473691   13262 addons.go:234] Setting addon inspektor-gadget=true in "addons-688294"
	I0721 23:26:26.472481   13262 addons.go:69] Setting ingress=true in profile "addons-688294"
	I0721 23:26:26.473736   13262 addons.go:234] Setting addon ingress=true in "addons-688294"
	I0721 23:26:26.473054   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473756   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473782   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.472513   13262 addons.go:69] Setting ingress-dns=true in profile "addons-688294"
	I0721 23:26:26.473875   13262 addons.go:234] Setting addon ingress-dns=true in "addons-688294"
	I0721 23:26:26.473915   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.473929   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.473953   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.473916   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.474050   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.474098   13262 out.go:177] * Verifying Kubernetes components...
	I0721 23:26:26.474011   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.474463   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.474494   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.474531   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.474649   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.474705   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.474720   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.474759   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.474850   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.474871   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.475532   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:26:26.493934   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I0721 23:26:26.494159   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0721 23:26:26.495040   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.495202   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.495265   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46375
	I0721 23:26:26.495353   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
	I0721 23:26:26.499182   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.499237   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.505458   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.505485   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.505669   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0721 23:26:26.505942   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.505962   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.506215   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.506301   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.506801   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.506820   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.506844   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.507307   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.507332   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.507368   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.507407   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.507669   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.507760   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.508336   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.508339   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.508373   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.508805   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.508822   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.509071   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.509091   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.509442   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.509653   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.510356   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43383
	I0721 23:26:26.512066   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.512651   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.512675   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.514725   13262 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-688294"
	I0721 23:26:26.514766   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.515124   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.515142   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.519005   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.520042   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.520067   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.520423   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.520586   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.522249   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.522654   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.522674   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.534633   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0721 23:26:26.534808   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I0721 23:26:26.535191   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.535689   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.535712   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.536040   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.536205   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
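Each "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:<port>" pair is libmachine starting the driver as a child process and connecting to it over a local RPC socket; the GetVersion / SetConfigRaw / GetMachineName calls that follow are the handshake on that connection. A toy net/rpc version of the pattern (method name borrowed from the log; this is not libmachine's actual RPC surface):

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    // Driver stands in for a machine-driver plugin served over local RPC.
    type Driver struct{}

    // GetVersion is the handshake call seen as "Using API Version  1" above.
    func (d *Driver) GetVersion(_ int, v *int) error { *v = 1; return nil }

    func main() {
        // Plugin side: listen on an ephemeral localhost port, as in the
        // "Plugin server listening at address 127.0.0.1:NNNNN" lines.
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            panic(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        go srv.Accept(ln)

        // Host side: dial the advertised address and run the version handshake.
        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        var v int
        if err := client.Call("Driver.GetVersion", 0, &v); err != nil {
            panic(err)
        }
        fmt.Println("Using API Version", v)
    }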
	I0721 23:26:26.539001   13262 addons.go:234] Setting addon default-storageclass=true in "addons-688294"
	I0721 23:26:26.539043   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:26.539404   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.539440   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.539654   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0721 23:26:26.539797   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.542489   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34725
	I0721 23:26:26.542508   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0721 23:26:26.542498   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41573
	I0721 23:26:26.542643   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0721 23:26:26.542657   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.542678   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.543106   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.543180   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.543199   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.543759   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.543800   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.544004   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.544103   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.544116   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.544462   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.544476   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.544536   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.545103   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.545144   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.545353   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.545432   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.545446   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.545811   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.545831   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.545880   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.545893   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.546333   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.546368   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.546652   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.546679   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.546752   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.546785   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.547347   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.547381   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.547911   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.547935   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.548406   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.549014   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.549047   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.555366   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I0721 23:26:26.555887   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.556919   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.556937   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.557253   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.557829   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.557870   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.558059   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0721 23:26:26.558523   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.559237   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.559256   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.559523   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43653
	I0721 23:26:26.559852   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.560257   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.560275   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.560106   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.560617   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.560988   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.561023   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.561345   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0721 23:26:26.562003   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.562038   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.564908   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0721 23:26:26.565361   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.565854   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.565871   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.566143   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.566577   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.566625   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.566843   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0721 23:26:26.566944   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0721 23:26:26.567283   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.567355   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.567863   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.567884   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.568020   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.568031   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.568207   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.568424   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.568443   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.569020   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.570535   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.571061   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.572598   13262 out.go:177]   - Using image docker.io/registry:2.8.3
	I0721 23:26:26.572699   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0721 23:26:26.572737   13262 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0721 23:26:26.573097   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.573603   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.573621   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.573831   13262 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0721 23:26:26.573846   13262 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0721 23:26:26.573863   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.573915   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.574088   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.574853   13262 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0721 23:26:26.576549   13262 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0721 23:26:26.576566   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0721 23:26:26.576584   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.577155   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40751
	I0721 23:26:26.577392   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.578074   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.578708   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.579483   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.579501   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.579562   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.579906   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.580323   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.580334   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.580349   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.580361   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.580496   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.580637   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.580765   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
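sshutil.go:53 builds the SSH client that every later step rides on: key-based auth as the docker user against the guest address from the DHCP lease above. A hedged sketch with golang.org/x/crypto/ssh (host-key verification is skipped purely for brevity):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient mirrors the fields in the sshutil line above:
    // {IP, Port, SSHKeyPath, Username}.
    func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }

    func main() {
        client, err := newSSHClient("192.168.39.142", 22,
            "/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa",
            "docker")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer client.Close()
    }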
	I0721 23:26:26.581178   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.581239   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.581456   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.581619   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.581782   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.581910   13262 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0721 23:26:26.582020   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.582523   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.583141   13262 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0721 23:26:26.583156   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0721 23:26:26.583171   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.584748   13262 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0721 23:26:26.586331   13262 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0721 23:26:26.586351   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0721 23:26:26.586370   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.586547   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.587285   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.587304   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.587454   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.587624   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.587849   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.588462   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
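The "scp memory --> <dst> (N bytes)" lines stream addon manifests that are embedded in the minikube binary straight to the guest, with no temporary file on the host. The real transfer speaks the scp protocol; the sketch below gets the same effect by piping bytes into a remote `sudo tee` (the payload and destination here are illustrative):

    package main

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushAsset streams an in-memory manifest into `sudo tee` on the guest,
    // a simplified stand-in for the "scp memory --> <dst>" transfers above.
    func pushAsset(client *ssh.Client, dst string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "192.168.39.142:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Illustrative payload; the real manifests are embedded assets
        // streamed byte-for-byte.
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n")
        if err := pushAsset(client, "/etc/kubernetes/addons/example-ns.yaml", manifest); err != nil {
            panic(err)
        }
    }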
	I0721 23:26:26.589639   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.590228   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.590228   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.590268   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.590378   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.590505   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.590816   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.595496   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.596129   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.596148   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.596577   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.596801   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.598568   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I0721 23:26:26.598979   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.599494   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.599541   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.599941   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.600372   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.600429   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0721 23:26:26.600563   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37177
	I0721 23:26:26.601311   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
	I0721 23:26:26.601724   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.602133   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0721 23:26:26.602663   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0721 23:26:26.602778   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.603049   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.603105   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.603125   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.603184   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.603727   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.603743   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.603817   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.603940   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.603951   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.604002   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.604041   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.604267   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.604290   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.604387   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.604404   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.604267   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.604436   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.604836   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:26.604871   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:26.605685   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.605850   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.606045   13262 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0721 23:26:26.606271   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.606393   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.606415   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.606992   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.606798   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.607465   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.607705   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:26.607719   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:26.607908   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:26.607918   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:26.607926   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:26.607934   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:26.608187   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:26.608216   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:26.608224   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	W0721 23:26:26.608303   13262 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
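Note: the 'volcano' warning above is the expected outcome on this job rather than an addon-machinery failure: the addon's enable callback rejects the cri-o runtime outright ("volcano addon does not support crio"), so volcano is skipped and the remaining addons continue to install. Which addons actually came up can be confirmed against the same profile with the stock addons subcommand (a sketch; the binary path and profile name are the ones used throughout this run):

	out/minikube-linux-amd64 -p addons-688294 addons list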
	I0721 23:26:26.608426   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0721 23:26:26.608433   13262 out.go:177]   - Using image docker.io/busybox:stable
	I0721 23:26:26.608480   13262 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0721 23:26:26.608495   13262 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0721 23:26:26.608761   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.609691   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.609705   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.609779   13262 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0721 23:26:26.609797   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0721 23:26:26.609814   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.609956   13262 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0721 23:26:26.609971   13262 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0721 23:26:26.609987   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.610040   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.610209   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.610349   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0721 23:26:26.610359   13262 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0721 23:26:26.610373   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.610637   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0721 23:26:26.610949   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.611410   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.611425   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.611635   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.612918   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.614129   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.614862   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.615278   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.615308   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.615279   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0721 23:26:26.615546   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.615806   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.615847   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.616048   13262 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 23:26:26.616215   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.616727   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.616746   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.616917   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0721 23:26:26.616936   13262 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0721 23:26:26.616955   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.617005   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.617701   13262 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:26:26.617718   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 23:26:26.617733   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.617733   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.617708   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0721 23:26:26.617797   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.618022   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.618043   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.617936   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.618072   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.618095   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.618240   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.618486   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.618620   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.618636   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.618791   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.618836   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.618966   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.619079   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.619134   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.618759   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.620077   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.620542   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.620803   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.621559   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.621700   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.621742   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.621930   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.622108   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.622163   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.622305   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.622665   13262 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0721 23:26:26.623174   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.623629   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.623646   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.623840   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.624056   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.624215   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.624350   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.624406   13262 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0721 23:26:26.624669   13262 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0721 23:26:26.624680   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0721 23:26:26.624691   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.627086   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.627130   13262 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:26:26.627551   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.627579   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.627774   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.627931   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.628130   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.628268   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.629330   13262 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:26:26.630447   13262 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0721 23:26:26.630463   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0721 23:26:26.630477   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.631495   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0721 23:26:26.631891   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.631950   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35693
	I0721 23:26:26.632414   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.632526   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.632548   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.632849   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.632872   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.632877   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.633050   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.633243   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.633381   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.633432   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.634002   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.634023   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.634259   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.634404   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.634497   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0721 23:26:26.634646   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.634710   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.634820   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.634928   13262 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 23:26:26.634951   13262 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 23:26:26.634966   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.634930   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:26.635342   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.635474   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:26.635490   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:26.636043   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:26.636245   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:26.636895   13262 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0721 23:26:26.637919   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:26.638109   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.638124   13262 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0721 23:26:26.638137   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0721 23:26:26.638152   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.638458   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.638482   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.638659   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.638787   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.638924   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.639020   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.639331   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0721 23:26:26.640879   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0721 23:26:26.640958   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.641327   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.641365   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.641501   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.641659   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.641803   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.641944   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.643069   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0721 23:26:26.644430   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0721 23:26:26.645587   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0721 23:26:26.646629   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0721 23:26:26.647660   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0721 23:26:26.648640   13262 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0721 23:26:26.649677   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0721 23:26:26.649697   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0721 23:26:26.649729   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:26.652011   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.652335   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:26.652378   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:26.652500   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:26.652682   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:26.652838   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:26.652966   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:26.941798   13262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:26:26.941898   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0721 23:26:27.062799   13262 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0721 23:26:27.062822   13262 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0721 23:26:27.076780   13262 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0721 23:26:27.076800   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0721 23:26:27.117270   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0721 23:26:27.117290   13262 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0721 23:26:27.120449   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0721 23:26:27.121771   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 23:26:27.130362   13262 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0721 23:26:27.130383   13262 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0721 23:26:27.152349   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0721 23:26:27.157160   13262 node_ready.go:35] waiting up to 6m0s for node "addons-688294" to be "Ready" ...
	I0721 23:26:27.159738   13262 node_ready.go:49] node "addons-688294" has status "Ready":"True"
	I0721 23:26:27.159755   13262 node_ready.go:38] duration metric: took 2.571307ms for node "addons-688294" to be "Ready" ...
	I0721 23:26:27.159763   13262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:26:27.165456   13262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:27.171925   13262 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0721 23:26:27.171940   13262 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0721 23:26:27.178595   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0721 23:26:27.222825   13262 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0721 23:26:27.222854   13262 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0721 23:26:27.223252   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0721 23:26:27.229565   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0721 23:26:27.229581   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0721 23:26:27.267189   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0721 23:26:27.314262   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:26:27.332947   13262 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0721 23:26:27.332968   13262 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0721 23:26:27.339098   13262 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0721 23:26:27.339115   13262 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0721 23:26:27.339495   13262 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0721 23:26:27.339508   13262 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0721 23:26:27.350012   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0721 23:26:27.350029   13262 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0721 23:26:27.356834   13262 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0721 23:26:27.356848   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0721 23:26:27.415202   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0721 23:26:27.415228   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0721 23:26:27.429474   13262 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0721 23:26:27.429496   13262 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0721 23:26:27.510530   13262 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0721 23:26:27.510566   13262 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0721 23:26:27.520849   13262 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0721 23:26:27.520868   13262 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0721 23:26:27.552669   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0721 23:26:27.552689   13262 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0721 23:26:27.555152   13262 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0721 23:26:27.555181   13262 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0721 23:26:27.568265   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0721 23:26:27.619240   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0721 23:26:27.619266   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0721 23:26:27.652926   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0721 23:26:27.660185   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0721 23:26:27.719997   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0721 23:26:27.720028   13262 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0721 23:26:27.745204   13262 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0721 23:26:27.745228   13262 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0721 23:26:27.804249   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0721 23:26:27.804271   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0721 23:26:27.819632   13262 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0721 23:26:27.819650   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0721 23:26:27.885452   13262 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:26:27.885478   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0721 23:26:27.920073   13262 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0721 23:26:27.920098   13262 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0721 23:26:27.995955   13262 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0721 23:26:27.995978   13262 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0721 23:26:28.133491   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0721 23:26:28.166251   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:26:28.250449   13262 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0721 23:26:28.250476   13262 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0721 23:26:28.279181   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0721 23:26:28.279201   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0721 23:26:28.510129   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0721 23:26:28.510152   13262 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0721 23:26:28.554887   13262 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0721 23:26:28.554906   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0721 23:26:28.755137   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0721 23:26:28.755172   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0721 23:26:28.794772   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0721 23:26:28.886681   13262 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.944747392s)
	I0721 23:26:28.886710   13262 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
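Note: the 1.94s command completed above is the CoreDNS rewrite launched at 23:26:26.94: it pipes the live coredns ConfigMap through sed to splice a hosts block, mapping host.minikube.internal to the host-side bridge IP 192.168.39.1, in front of the "forward . /etc/resolv.conf" line, then replaces the ConfigMap. The injected stanza can be inspected after the fact (a sketch with stock kubectl; the context name matches this profile):

	kubectl --context addons-688294 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected to show, ahead of the forward plugin:
	#     hosts {
	#        192.168.39.1 host.minikube.internal
	#        fallthrough
	#     }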
	I0721 23:26:28.989002   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0721 23:26:28.989024   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0721 23:26:29.177747   13262 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace has status "Ready":"False"
	I0721 23:26:29.241796   13262 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0721 23:26:29.241824   13262 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0721 23:26:29.320569   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.198768294s)
	I0721 23:26:29.320627   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.320637   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.320705   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.200216529s)
	I0721 23:26:29.320748   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.320764   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.320917   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:29.320961   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.320969   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.320985   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.320991   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.321007   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:29.321042   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.321050   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.321063   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.321071   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.321268   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.321281   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.322735   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:29.322753   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.322790   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.348919   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:29.348941   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:29.349217   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:29.349242   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:29.349249   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:29.390162   13262 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-688294" context rescaled to 1 replicas
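Note: the rescale above trims CoreDNS from the kubeadm default of two replicas down to one, which is why two coredns pods (…-gjb75 and …-wxvm9) appear in the readiness waits before the deployment settles at a single replica on this single-node cluster. The equivalent by hand (a sketch; minikube performs this through the API rather than via kubectl):

	kubectl --context addons-688294 -n kube-system scale deployment coredns --replicas=1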
	I0721 23:26:29.416536   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0721 23:26:31.193415   13262 pod_ready.go:102] pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace has status "Ready":"False"
	I0721 23:26:31.424055   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.271673317s)
	I0721 23:26:31.424072   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.245414703s)
	I0721 23:26:31.424105   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424105   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424116   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424118   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424119   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.200836462s)
	I0721 23:26:31.424339   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424364   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424499   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.424542   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.424552   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.424550   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.424597   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.424599   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424624   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.424636   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424648   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.424655   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.424606   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.424839   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.424873   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.424890   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.425032   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.425052   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.425059   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.426653   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.426667   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.426676   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.426683   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.426897   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.426927   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.426936   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.505375   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:31.505399   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:31.505665   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:31.505686   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:31.505718   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:31.699599   13262 pod_ready.go:92] pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.699624   13262 pod_ready.go:81] duration metric: took 4.534145821s for pod "coredns-7db6d8ff4d-gjb75" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.699637   13262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wxvm9" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.723008   13262 pod_ready.go:92] pod "coredns-7db6d8ff4d-wxvm9" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.723027   13262 pod_ready.go:81] duration metric: took 23.384884ms for pod "coredns-7db6d8ff4d-wxvm9" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.723037   13262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.751710   13262 pod_ready.go:92] pod "etcd-addons-688294" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.751730   13262 pod_ready.go:81] duration metric: took 28.687782ms for pod "etcd-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.751739   13262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.813970   13262 pod_ready.go:92] pod "kube-apiserver-addons-688294" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.813989   13262 pod_ready.go:81] duration metric: took 62.243947ms for pod "kube-apiserver-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.813998   13262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.913601   13262 pod_ready.go:92] pod "kube-controller-manager-addons-688294" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:31.913629   13262 pod_ready.go:81] duration metric: took 99.623509ms for pod "kube-controller-manager-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:31.913643   13262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jcqpx" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:32.068960   13262 pod_ready.go:92] pod "kube-proxy-jcqpx" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:32.068982   13262 pod_ready.go:81] duration metric: took 155.331037ms for pod "kube-proxy-jcqpx" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:32.068991   13262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:32.471239   13262 pod_ready.go:92] pod "kube-scheduler-addons-688294" in "kube-system" namespace has status "Ready":"True"
	I0721 23:26:32.471263   13262 pod_ready.go:81] duration metric: took 402.264753ms for pod "kube-scheduler-addons-688294" in "kube-system" namespace to be "Ready" ...
	I0721 23:26:32.471273   13262 pod_ready.go:38] duration metric: took 5.311498447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
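Note: readiness gating in this phase is two-staged: the node check returned Ready almost immediately (2.57ms, since the node was already up), and the harness then spent ~5.3s waiting for every system-critical pod (CoreDNS, etcd, apiserver, controller-manager, proxy, scheduler) to report Ready. A rough manual equivalent with stock kubectl (a sketch; the selector mirrors one of the labels listed in the log):

	kubectl --context addons-688294 wait --for=condition=Ready node/addons-688294 --timeout=6m
	kubectl --context addons-688294 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m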
	I0721 23:26:32.471291   13262 api_server.go:52] waiting for apiserver process to appear ...
	I0721 23:26:32.471358   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
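Note: with the gate pods Ready, the harness verifies the control plane at the process level rather than through the API. In the pgrep invocation above, -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID, so a hit means a kube-apiserver launched for this minikube instance is actually running. The same probe can be run by hand over the profile's ssh wrapper (a sketch, mirroring the ssh form used elsewhere in this report):

	out/minikube-linux-amd64 -p addons-688294 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"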
	I0721 23:26:33.682884   13262 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0721 23:26:33.682926   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:33.686394   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:33.686864   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:33.686907   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:33.687125   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:33.687330   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:33.687488   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:33.687651   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:33.919910   13262 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0721 23:26:33.963095   13262 addons.go:234] Setting addon gcp-auth=true in "addons-688294"
	I0721 23:26:33.963142   13262 host.go:66] Checking if "addons-688294" exists ...
	I0721 23:26:33.963423   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:33.963451   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:33.979127   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0721 23:26:33.979688   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:33.980128   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:33.980152   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:33.980560   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:33.981042   13262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:26:33.981081   13262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:26:33.995402   13262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0721 23:26:33.995879   13262 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:26:33.996416   13262 main.go:141] libmachine: Using API Version  1
	I0721 23:26:33.996438   13262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:26:33.996762   13262 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:26:33.996953   13262 main.go:141] libmachine: (addons-688294) Calling .GetState
	I0721 23:26:33.998380   13262 main.go:141] libmachine: (addons-688294) Calling .DriverName
	I0721 23:26:33.998638   13262 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0721 23:26:33.998669   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHHostname
	I0721 23:26:34.001509   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:34.001944   13262 main.go:141] libmachine: (addons-688294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:13:11", ip: ""} in network mk-addons-688294: {Iface:virbr1 ExpiryTime:2024-07-22 00:25:47 +0000 UTC Type:0 Mac:52:54:00:58:13:11 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-688294 Clientid:01:52:54:00:58:13:11}
	I0721 23:26:34.001972   13262 main.go:141] libmachine: (addons-688294) DBG | domain addons-688294 has defined IP address 192.168.39.142 and MAC address 52:54:00:58:13:11 in network mk-addons-688294
	I0721 23:26:34.002112   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHPort
	I0721 23:26:34.002286   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHKeyPath
	I0721 23:26:34.002464   13262 main.go:141] libmachine: (addons-688294) Calling .GetSSHUsername
	I0721 23:26:34.002594   13262 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/addons-688294/id_rsa Username:docker}
	I0721 23:26:34.457904   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.190672777s)
	I0721 23:26:34.457959   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.457960   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.143666176s)
	I0721 23:26:34.457973   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.457994   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458008   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.457994   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.88969924s)
	I0721 23:26:34.458054   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458064   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.805102613s)
	I0721 23:26:34.458071   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458131   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.797906302s)
	I0721 23:26:34.458150   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458092   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458162   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458185   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458247   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.324726481s)
	I0721 23:26:34.458264   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458273   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458347   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.292060025s)
	W0721 23:26:34.458393   13262 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0721 23:26:34.458421   13262 retry.go:31] will retry after 287.426306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0721 23:26:34.458513   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.663707627s)
	I0721 23:26:34.458534   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458543   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458758   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.458780   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.458793   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.458801   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458807   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458808   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.458820   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.458828   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458835   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458785   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.458881   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.458887   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.458894   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458901   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.458935   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.458951   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.458957   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.458964   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.458971   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.459004   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.459019   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.459028   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.459035   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.459042   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.459271   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.459303   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.459313   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.459321   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.459330   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.459390   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.459419   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.459424   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.459430   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:34.459436   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:34.461076   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.461108   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461114   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.461118   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461136   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461142   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461230   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461248   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461258   13262 addons.go:475] Verifying addon metrics-server=true in "addons-688294"
	I0721 23:26:34.461264   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.461312   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461319   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461326   13262 addons.go:475] Verifying addon ingress=true in "addons-688294"
	I0721 23:26:34.461647   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:34.461686   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461695   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461741   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461752   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461248   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:34.461975   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:34.461984   13262 addons.go:475] Verifying addon registry=true in "addons-688294"
	I0721 23:26:34.463163   13262 out.go:177] * Verifying ingress addon...
	I0721 23:26:34.463807   13262 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-688294 service yakd-dashboard -n yakd-dashboard
	
	I0721 23:26:34.463821   13262 out.go:177] * Verifying registry addon...
	I0721 23:26:34.465267   13262 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0721 23:26:34.466010   13262 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0721 23:26:34.486007   13262 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0721 23:26:34.486036   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:34.498761   13262 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0721 23:26:34.498786   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:34.746521   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:26:34.977174   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:34.984565   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:35.206172   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.789587897s)
	I0721 23:26:35.206227   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:35.206248   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:35.206277   13262 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.207620315s)
	I0721 23:26:35.206234   13262 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.734851735s)
	I0721 23:26:35.206331   13262 api_server.go:72] duration metric: took 8.733952881s to wait for apiserver process to appear ...
	I0721 23:26:35.206346   13262 api_server.go:88] waiting for apiserver healthz status ...
	I0721 23:26:35.206369   13262 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0721 23:26:35.206558   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:35.206620   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:35.206637   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:35.206654   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:35.206681   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:35.207138   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:35.207174   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:35.207189   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:35.207203   13262 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-688294"
	I0721 23:26:35.207908   13262 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:26:35.208732   13262 out.go:177] * Verifying csi-hostpath-driver addon...
	I0721 23:26:35.210120   13262 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0721 23:26:35.210884   13262 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0721 23:26:35.211139   13262 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0721 23:26:35.211160   13262 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0721 23:26:35.222500   13262 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0721 23:26:35.223794   13262 api_server.go:141] control plane version: v1.30.3
	I0721 23:26:35.223816   13262 api_server.go:131] duration metric: took 17.462329ms to wait for apiserver health ...
	I0721 23:26:35.223825   13262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0721 23:26:35.238659   13262 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0721 23:26:35.238679   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:35.265571   13262 system_pods.go:59] 19 kube-system pods found
	I0721 23:26:35.265598   13262 system_pods.go:61] "coredns-7db6d8ff4d-gjb75" [c86d3c78-58cc-447e-a5c9-52d4e4a20e1a] Running
	I0721 23:26:35.265603   13262 system_pods.go:61] "coredns-7db6d8ff4d-wxvm9" [2a1974fc-f711-4ee3-9ea9-0950557b6591] Running
	I0721 23:26:35.265609   13262 system_pods.go:61] "csi-hostpath-attacher-0" [40077e94-802d-420a-b455-ab737983b277] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0721 23:26:35.265613   13262 system_pods.go:61] "csi-hostpath-resizer-0" [aecffca5-4e9e-4b3a-aa94-26595456d158] Pending
	I0721 23:26:35.265621   13262 system_pods.go:61] "csi-hostpathplugin-h5wsx" [c86e378b-c880-4595-8d6e-08e01fb0245d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0721 23:26:35.265625   13262 system_pods.go:61] "etcd-addons-688294" [856f9d44-b0a3-4b78-8036-e0d7c246f307] Running
	I0721 23:26:35.265630   13262 system_pods.go:61] "kube-apiserver-addons-688294" [9f5dff41-7d2a-4999-b1e3-d4d5fb9b6df9] Running
	I0721 23:26:35.265634   13262 system_pods.go:61] "kube-controller-manager-addons-688294" [8f0e109c-e220-4b7a-a2a6-31276fab4267] Running
	I0721 23:26:35.265639   13262 system_pods.go:61] "kube-ingress-dns-minikube" [3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0721 23:26:35.265644   13262 system_pods.go:61] "kube-proxy-jcqpx" [03cc3bb7-95da-48e2-9f10-bbc947e4f3ee] Running
	I0721 23:26:35.265651   13262 system_pods.go:61] "kube-scheduler-addons-688294" [392d1358-a63c-49c0-8f9e-98ba38f0847c] Running
	I0721 23:26:35.265658   13262 system_pods.go:61] "metrics-server-c59844bb4-bstqh" [ae1f9397-4344-4d3c-a416-ee538fc6ae94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0721 23:26:35.265666   13262 system_pods.go:61] "nvidia-device-plugin-daemonset-mqmww" [8f13b775-6ef2-4604-a624-4a861b5001b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0721 23:26:35.265674   13262 system_pods.go:61] "registry-656c9c8d9c-f6bxb" [8ed372bf-f96f-42fa-a8f1-eddc6650451c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0721 23:26:35.265685   13262 system_pods.go:61] "registry-proxy-2gnkd" [a7a0e03d-5c29-4e30-9118-ff8299b7ca06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0721 23:26:35.265695   13262 system_pods.go:61] "snapshot-controller-745499f584-jhgrt" [3b4f303a-68fb-4d26-bdf5-dfe540adffc9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0721 23:26:35.265708   13262 system_pods.go:61] "snapshot-controller-745499f584-mc4vn" [ff9546b7-95c6-4243-82bb-356750d46a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0721 23:26:35.265714   13262 system_pods.go:61] "storage-provisioner" [e698e282-1395-4fd6-a797-6a0eb40bbabc] Running
	I0721 23:26:35.265722   13262 system_pods.go:61] "tiller-deploy-6677d64bcd-7tqs9" [c6255c6f-8301-451a-905c-7aabaac5493c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0721 23:26:35.265730   13262 system_pods.go:74] duration metric: took 41.899202ms to wait for pod list to return data ...
	I0721 23:26:35.265739   13262 default_sa.go:34] waiting for default service account to be created ...
	I0721 23:26:35.278634   13262 default_sa.go:45] found service account: "default"
	I0721 23:26:35.278660   13262 default_sa.go:55] duration metric: took 12.914679ms for default service account to be created ...
	I0721 23:26:35.278670   13262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0721 23:26:35.290715   13262 system_pods.go:86] 19 kube-system pods found
	I0721 23:26:35.290739   13262 system_pods.go:89] "coredns-7db6d8ff4d-gjb75" [c86d3c78-58cc-447e-a5c9-52d4e4a20e1a] Running
	I0721 23:26:35.290745   13262 system_pods.go:89] "coredns-7db6d8ff4d-wxvm9" [2a1974fc-f711-4ee3-9ea9-0950557b6591] Running
	I0721 23:26:35.290755   13262 system_pods.go:89] "csi-hostpath-attacher-0" [40077e94-802d-420a-b455-ab737983b277] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0721 23:26:35.290761   13262 system_pods.go:89] "csi-hostpath-resizer-0" [aecffca5-4e9e-4b3a-aa94-26595456d158] Pending
	I0721 23:26:35.290778   13262 system_pods.go:89] "csi-hostpathplugin-h5wsx" [c86e378b-c880-4595-8d6e-08e01fb0245d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0721 23:26:35.290787   13262 system_pods.go:89] "etcd-addons-688294" [856f9d44-b0a3-4b78-8036-e0d7c246f307] Running
	I0721 23:26:35.290796   13262 system_pods.go:89] "kube-apiserver-addons-688294" [9f5dff41-7d2a-4999-b1e3-d4d5fb9b6df9] Running
	I0721 23:26:35.290801   13262 system_pods.go:89] "kube-controller-manager-addons-688294" [8f0e109c-e220-4b7a-a2a6-31276fab4267] Running
	I0721 23:26:35.290809   13262 system_pods.go:89] "kube-ingress-dns-minikube" [3a97d19a-bb6d-49c5-9b41-29af1b1fc3bb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0721 23:26:35.290815   13262 system_pods.go:89] "kube-proxy-jcqpx" [03cc3bb7-95da-48e2-9f10-bbc947e4f3ee] Running
	I0721 23:26:35.290820   13262 system_pods.go:89] "kube-scheduler-addons-688294" [392d1358-a63c-49c0-8f9e-98ba38f0847c] Running
	I0721 23:26:35.290826   13262 system_pods.go:89] "metrics-server-c59844bb4-bstqh" [ae1f9397-4344-4d3c-a416-ee538fc6ae94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0721 23:26:35.290842   13262 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmww" [8f13b775-6ef2-4604-a624-4a861b5001b1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0721 23:26:35.290854   13262 system_pods.go:89] "registry-656c9c8d9c-f6bxb" [8ed372bf-f96f-42fa-a8f1-eddc6650451c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0721 23:26:35.290867   13262 system_pods.go:89] "registry-proxy-2gnkd" [a7a0e03d-5c29-4e30-9118-ff8299b7ca06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0721 23:26:35.290907   13262 system_pods.go:89] "snapshot-controller-745499f584-jhgrt" [3b4f303a-68fb-4d26-bdf5-dfe540adffc9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0721 23:26:35.290918   13262 system_pods.go:89] "snapshot-controller-745499f584-mc4vn" [ff9546b7-95c6-4243-82bb-356750d46a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0721 23:26:35.290925   13262 system_pods.go:89] "storage-provisioner" [e698e282-1395-4fd6-a797-6a0eb40bbabc] Running
	I0721 23:26:35.290932   13262 system_pods.go:89] "tiller-deploy-6677d64bcd-7tqs9" [c6255c6f-8301-451a-905c-7aabaac5493c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0721 23:26:35.290941   13262 system_pods.go:126] duration metric: took 12.26527ms to wait for k8s-apps to be running ...
	I0721 23:26:35.290953   13262 system_svc.go:44] waiting for kubelet service to be running ....
	I0721 23:26:35.291009   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:26:35.348144   13262 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0721 23:26:35.348166   13262 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0721 23:26:35.396624   13262 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0721 23:26:35.396643   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0721 23:26:35.446797   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0721 23:26:35.470356   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:35.470637   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:35.718872   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:35.971783   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:35.972010   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:36.216166   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:36.299362   13262 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.008321294s)
	I0721 23:26:36.299381   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.552811786s)
	I0721 23:26:36.299401   13262 system_svc.go:56] duration metric: took 1.008444614s WaitForService to wait for kubelet
	I0721 23:26:36.299411   13262 kubeadm.go:582] duration metric: took 9.827035938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:26:36.299430   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:36.299439   13262 node_conditions.go:102] verifying NodePressure condition ...
	I0721 23:26:36.299447   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:36.299890   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:36.299910   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:36.299919   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:36.299928   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:36.300242   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:36.300264   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:36.300283   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:36.302799   13262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:26:36.302820   13262 node_conditions.go:123] node cpu capacity is 2
	I0721 23:26:36.302829   13262 node_conditions.go:105] duration metric: took 3.385045ms to run NodePressure ...
	I0721 23:26:36.302839   13262 start.go:241] waiting for startup goroutines ...
	I0721 23:26:36.503901   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:36.504507   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:36.749772   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:36.785906   13262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.33907151s)
	I0721 23:26:36.785981   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:36.786000   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:36.786254   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:36.786272   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:36.786283   13262 main.go:141] libmachine: Making call to close driver server
	I0721 23:26:36.786292   13262 main.go:141] libmachine: (addons-688294) Calling .Close
	I0721 23:26:36.786508   13262 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:26:36.786516   13262 main.go:141] libmachine: (addons-688294) DBG | Closing plugin on server side
	I0721 23:26:36.786525   13262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:26:36.788390   13262 addons.go:475] Verifying addon gcp-auth=true in "addons-688294"
	I0721 23:26:36.789894   13262 out.go:177] * Verifying gcp-auth addon...
	I0721 23:26:36.791877   13262 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0721 23:26:36.838209   13262 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0721 23:26:36.838229   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:36.972170   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:36.974168   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:37.217753   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:37.295519   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:37.469966   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:37.471023   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:37.716522   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:37.795838   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:37.970797   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:37.971718   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:38.216682   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:38.298625   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:38.470593   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:38.470988   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:38.717740   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:38.795929   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:38.971439   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:38.971446   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:39.217135   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:39.294879   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:39.470241   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:39.470501   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:39.715880   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:39.795249   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:39.971556   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:39.974334   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:40.216791   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:40.295027   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:40.471871   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:40.475493   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:40.929131   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:40.930137   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:40.971195   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:40.972018   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:41.216640   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:41.295044   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:41.471045   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:41.471453   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:41.715653   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:41.795675   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:41.969709   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:41.971389   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:42.221237   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:42.331310   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:42.471330   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:42.471559   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:42.719452   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:42.795961   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:42.969700   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:42.970769   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:43.251644   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:43.296219   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:43.469587   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:43.471527   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:43.716224   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:43.795652   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:43.969693   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:43.970819   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:44.216281   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:44.295461   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:44.469374   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:44.470655   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:44.716626   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:44.796315   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:44.970483   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:44.970664   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:45.217280   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:45.295271   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:45.471355   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:45.471736   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:45.717034   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:45.795854   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:45.969942   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:45.972134   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:46.216149   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:46.295207   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:46.471756   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:46.472049   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:46.716238   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:46.795240   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:46.971580   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:46.971734   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:47.216740   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:47.295883   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:47.469690   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:47.472260   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:47.716415   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:47.795659   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:47.970257   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:47.972481   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:48.216867   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:48.295276   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:48.471849   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:48.472002   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:48.716459   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:48.796076   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:48.974646   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:48.974822   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:49.215956   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:49.294755   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:49.469361   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:49.471377   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:49.717031   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:49.794924   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:49.971098   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:49.971694   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:50.216240   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:50.295914   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:50.471424   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:50.471609   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:50.719955   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:50.796369   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:50.971874   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:50.973265   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:51.216749   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:51.296176   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:51.470866   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:51.472368   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:51.716330   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:51.795556   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:51.969198   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:51.971919   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:52.216938   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:52.295345   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:52.470813   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:52.470897   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:52.716354   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:52.795646   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:52.970127   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:52.971798   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:53.217597   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:53.296237   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:53.469817   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:53.474498   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:53.716578   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:53.804889   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:53.971973   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:53.973925   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:54.225723   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:54.297257   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:54.471177   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:54.471870   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:54.715928   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:54.795370   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:54.971545   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:54.971867   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:55.216590   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:55.295953   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:55.470576   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:55.471044   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:55.717086   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:55.800499   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:55.969801   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:55.972422   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:56.215704   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:56.295189   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:56.471074   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:56.473006   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:56.715765   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:56.796950   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:56.970788   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:56.973186   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:57.216353   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:57.295887   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:57.470672   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:57.471809   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:57.716036   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:57.795200   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:57.971219   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:57.971553   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:58.222015   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:58.295981   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:58.469632   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:58.471489   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:58.716164   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:58.799232   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:58.972880   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:58.974318   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:59.215713   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:59.295827   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:59.470949   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:59.473251   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:26:59.716883   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:26:59.801422   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:26:59.970170   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:26:59.971394   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:00.216070   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:00.295301   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:00.474941   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:00.475024   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:00.716314   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:00.795653   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:00.969619   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:00.971386   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:01.217862   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:01.295489   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:01.469934   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:01.472664   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:01.715898   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:01.795730   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:01.970171   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:01.970470   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:02.215957   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:02.295264   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:02.471978   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:02.472128   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:02.716840   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:02.796643   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:02.970646   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:02.971628   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:03.217013   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:03.295381   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:03.471307   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:03.471953   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:03.717411   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:03.795404   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:03.969240   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:03.970538   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:04.216053   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:04.296086   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:04.472596   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:04.473420   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:04.716875   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:04.795934   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:04.970704   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:04.970711   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:05.216417   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:05.295758   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:05.470738   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:05.472177   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:05.716463   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:05.796777   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:05.969502   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:05.971351   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:06.216507   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:06.295108   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:06.471567   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:06.471834   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:06.716761   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:06.794903   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:06.971436   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:06.971723   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:07.216389   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:07.295829   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:07.469871   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:07.472386   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:07.716051   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:07.795411   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:07.970200   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:07.971317   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:08.215782   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:08.295261   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:08.469606   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:08.471513   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:08.719714   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:08.796523   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:08.971186   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:08.971224   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:09.218278   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:09.295187   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:09.470412   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:09.472012   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:09.716543   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:09.795507   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:09.971281   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:09.971478   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:10.215680   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:10.294961   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:10.470265   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:10.470350   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:10.715480   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:10.796035   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:10.972202   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:10.972508   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:11.216179   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:11.296713   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:11.469451   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:11.471070   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:11.716921   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:11.795760   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:11.970117   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:11.971644   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:12.435057   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:12.437734   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:12.469591   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:12.472680   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:12.716496   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:12.796332   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:12.971027   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:12.971153   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:13.218985   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:13.295883   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:13.469847   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:13.472860   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:13.716649   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:13.794977   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:13.970727   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:13.970806   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:14.216605   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:14.295300   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:14.470052   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:14.470291   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:14.716152   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:14.796462   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:14.971502   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:14.971560   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:15.216720   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:15.295130   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:15.640612   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:15.641404   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:15.718330   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:15.795810   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:15.971043   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:15.971211   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:16.216645   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:16.296694   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:16.469617   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:16.470833   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:16.716697   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:16.796346   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:16.970208   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:16.970216   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:17.215782   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:17.295172   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:17.471023   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:17.472194   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:17.716658   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:17.794837   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:17.972431   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:17.973976   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:18.216849   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:18.296061   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:18.469736   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:18.471216   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:18.716435   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:18.796110   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:18.970212   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:18.970467   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:19.216267   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:19.295563   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:19.472297   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:19.472717   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:19.716125   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:19.795818   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:19.971038   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:19.971437   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:20.215636   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:20.296521   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:20.469309   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:20.470698   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:20.717299   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:20.795398   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:20.971455   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:20.971738   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:21.218971   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:21.296033   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:21.470138   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:21.472089   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:21.715514   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:21.795698   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:21.969507   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:21.971512   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:22.217044   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:22.295667   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:22.470013   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:22.472491   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:22.716229   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:22.795547   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:22.971361   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:22.973555   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:23.218798   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:23.295952   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:23.470787   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:23.471794   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:23.717782   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:23.794967   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:23.971377   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:23.971940   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:24.216264   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:24.295490   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:24.469640   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:24.472079   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:24.715964   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:24.794965   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:24.970424   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:24.971540   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:25.216519   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:25.295424   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:25.469174   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:25.470358   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:25.715756   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:25.795192   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:25.971569   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:25.971644   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:27:26.215842   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:26.295970   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:26.471107   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:26.471604   13262 kapi.go:107] duration metric: took 52.005591215s to wait for kubernetes.io/minikube-addons=registry ...
	I0721 23:27:26.717307   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:26.796477   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:26.972689   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:27.222357   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:27.299496   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:27.469374   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:27.716294   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:27.796213   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:27.970386   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:28.217451   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:28.297532   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:28.471723   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:28.719852   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:28.795124   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:28.970406   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:29.217300   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:29.296938   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:29.470002   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:29.716681   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:29.795284   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:29.971969   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:30.216402   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:30.295689   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:30.469802   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:30.716428   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:30.795854   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:30.969881   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:31.216137   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:31.297543   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:31.469229   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:31.716630   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:31.794810   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:31.969692   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:32.238174   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:32.406493   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:32.471592   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:32.717249   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:32.795916   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:32.970354   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:33.216518   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:33.295490   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:33.469430   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:33.716184   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:33.800622   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:34.416330   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:34.419614   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:34.421489   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:34.469388   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:34.716641   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:34.794948   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:34.969900   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:35.216554   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:35.295889   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:35.470330   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:35.717578   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:35.796377   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:35.970791   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:36.224894   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:36.296325   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:36.470644   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:36.719535   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:36.795443   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:36.970541   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:37.217122   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:37.295952   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:37.470176   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:37.721587   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:37.798527   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:37.970613   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:38.216309   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:38.295966   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:38.470646   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:38.716218   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:38.796020   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:38.970300   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:39.224327   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:39.302060   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:39.469649   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:39.716231   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:39.795749   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:39.969781   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:40.216496   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:40.295858   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:40.470659   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:40.730246   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:40.795940   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:40.970298   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:41.240753   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:41.297754   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:41.470107   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:41.717249   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:41.795788   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:41.969931   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:42.217848   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:42.295900   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:42.470816   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:42.718725   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:42.796392   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:42.970332   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:43.215893   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:43.295405   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:43.469806   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:43.716812   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:43.795741   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:43.970579   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:44.215908   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:44.295341   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:44.470543   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:44.716493   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:44.796116   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:44.970457   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:45.216862   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:45.295416   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:45.470624   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:45.717356   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:45.796167   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:45.970596   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:46.216304   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:46.295693   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:46.472854   13262 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:27:46.718301   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:46.795434   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:46.969008   13262 kapi.go:107] duration metric: took 1m12.503737006s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0721 23:27:47.216822   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:47.295006   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:47.715974   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:47.795131   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:48.216244   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:48.295968   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:48.716696   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:48.795965   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:49.216473   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:49.296008   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:49.715950   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:49.795155   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:50.215963   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:50.295564   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:50.717601   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:50.795822   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:51.217524   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:51.295387   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:27:51.720770   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:51.802765   13262 kapi.go:107] duration metric: took 1m15.010883552s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0721 23:27:51.804163   13262 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-688294 cluster.
	I0721 23:27:51.805473   13262 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0721 23:27:51.806651   13262 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
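	As a minimal sketch of the two options described in the gcp-auth output above (pod name is illustrative, not taken from this run; the label value shown is the conventional one, the log only specifies the key): a pod created with the gcp-auth-skip-secret label is skipped by the webhook, and re-running the addon enable with --refresh re-mounts credentials into existing pods:
	    # Hypothetical pod that opts out of credential mounting via the label
	    kubectl --context addons-688294 run skip-demo --image=nginx --labels=gcp-auth-skip-secret=true
	    # Re-mount credentials into pods that already existed when the addon was enabled
	    out/minikube-linux-amd64 -p addons-688294 addons enable gcp-auth --refresh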
	I0721 23:27:52.216611   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:52.716080   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:53.221525   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:53.715386   13262 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:27:54.216544   13262 kapi.go:107] duration metric: took 1m19.005654586s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0721 23:27:54.218212   13262 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, storage-provisioner, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0721 23:27:54.219475   13262 addons.go:510] duration metric: took 1m27.747081657s for enable addons: enabled=[cloud-spanner default-storageclass ingress-dns nvidia-device-plugin storage-provisioner-rancher helm-tiller storage-provisioner metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0721 23:27:54.219522   13262 start.go:246] waiting for cluster config update ...
	I0721 23:27:54.219542   13262 start.go:255] writing updated cluster config ...
	I0721 23:27:54.219803   13262 ssh_runner.go:195] Run: rm -f paused
	I0721 23:27:54.269680   13262 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0721 23:27:54.271594   13262 out.go:177] * Done! kubectl is now configured to use "addons-688294" cluster and "default" namespace by default
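	The long runs of kapi.go:96 entries above are minikube polling each addon's label selector until its pods leave Pending, with the kapi.go:107 lines recording the total wait per selector (52s for registry, 1m12s for ingress-nginx, 1m15s for gcp-auth, 1m19s for csi-hostpath-driver). A rough manual equivalent, assuming the usual addon namespaces (ingress-nginx for the controller, kube-system for the CSI driver; the namespaces are not shown in these log lines), would be:
	    kubectl --context addons-688294 -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=120s
	    kubectl --context addons-688294 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=120s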
	
	
	==> CRI-O <==
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.865190377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=224fcde2-61a5-4836-82b9-63faae37e870 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.866528372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4dd3693b-7da9-48d3-b8af-765ae868b713 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.867941070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721604809867906911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4dd3693b-7da9-48d3-b8af-765ae868b713 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.868530204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3161be66-8f17-4b1f-87f3-a72eee2d4b40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.868592857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3161be66-8f17-4b1f-87f3-a72eee2d4b40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.868913122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15eff6453b4ff2e0d8599ed441e82194044302e6c0e9a6a67ec75b8c42c5d30,PodSandboxId:6723880a77daa888c26e180b63b4c929395e73ea7ce018b20ff4a126af5f86e7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721604636392888052,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-j4zfd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e0542de-ebc0-4bf8-81fa-be127d873ed9,},Annotations:map[string]string{io.kubernetes.container.hash: 81bb068b,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00dc37072b53da80fa11489232575a6ad6e2250cde1c3e356514457b351ecdb,PodSandboxId:ee9bac8c8411d30426965c8ce5f2a58660119779e024547aa3b4e22a27ae9e1d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721604497444887954,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b5aca8f-8b07-4191-ba5e-991bdee098bd,},Annotations:map[string]string{io.kubernet
es.container.hash: 473150de,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11795e013590a0a6cc24c6aae8310fa0410ebc68f84706b8bb4050aaa15dda4b,PodSandboxId:e2d9f4cc3301f3bce3b473aa714e330a70da19f99d23d2084be0a58858b4a499,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721604482449212746,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-2gjtz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 38892129-f578-47c0-8299-1968efa46c65,},Annotations:map[string]string{io.kubernetes.container.hash: b264623,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10fe9c60534cd4719d699ec276725ea6aa808d05bde5a847836b3d6e95aee5,PodSandboxId:0926d78b49df19034109ae2e58b0f379d6db5354dadb2bf634f8a9153fd6564c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721604471248712557,Labels:map[string]string{io.kubernetes.container.name: gcp-auth
,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-56jkt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6072e9b1-6994-4192-a4e0-48ae9b9edecc,},Annotations:map[string]string{io.kubernetes.container.hash: b184aad0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf19c7fa60a64b14e442b8ab9bd039cb633293bbd90afea968e3628b49c0596,PodSandboxId:7e2f0f95b9112d081336ca5e657829515a4410ebb17066373cbc2ea81d895ec3,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17216044
32542215746,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-7mmml,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a67c330a-d2bc-44b5-8cf9-8245a6e01af8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ab0e0b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b098dcd64eb0df4a12eecc6985e68a85e16fb027bb0e608209b88492c70e954e,PodSandboxId:95ebd78cf6a80027888c4d405607de04828eca51113fc96233eb93713b372e85,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721604427359631527,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bstqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1f9397-4344-4d3c-a416-ee538fc6ae94,},Annotations:map[string]string{io.kubernetes.container.hash: 3587015b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216918ce9b7bbc2ae42421b5a53f7d188c1ab874575b496710855e7fc763457f,PodSandboxId:7e607d1884ecdb9a2840076ec5b3b3f2dda187dedd971056b292da88015e8578,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721604392500607150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e698e282-1395-4fd6-a797-6a0eb40bbabc,},Annotations:map[string]string{io.kubernetes.container.hash: 92df3763,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0f54f488787a87c32433dd03d9c7a4464dce0bc84b589f65e99b07587f999,PodSandboxId:514c9ecc5bf0db899743ee7029041bddb8dc387bbae6dd08b7af952139757335,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797
ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721604390043596816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wxvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1974fc-f711-4ee3-9ea9-0950557b6591,},Annotations:map[string]string{io.kubernetes.container.hash: 726416f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c969bfef3f523281aaca87bb686017810ad5369caa22f2aaf3c61d00728f4e6b,PodSandbo
xId:43264f8b65dd3f4bb2a1a2f104fde6fbbca2337a2d23eac89553dfc8e26b32e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721604387307351550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jcqpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cc3bb7-95da-48e2-9f10-bbc947e4f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: cf9d8b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7922c57b9139f00b1d11e9d4cb3c435d10e0385f96da2f8e37b4fd1f8c219ea,PodSandboxId:3639e55de2a0b72f7f36302058d79
96262a1c3eb2d5dcc63f63ebd81bd42ecde,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721604367061133899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dfa3605f18775cc841f98db38b9796,},Annotations:map[string]string{io.kubernetes.container.hash: f78b0edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ceaeb4ab41339398a0cee66e7a13e30ce8f0543200c66b0c81fbfc71e8e45,PodSandboxId:111269c2448497e5e04484046bcf5613d4dbe399d4578d9169f3ea4b1ba4e86c,Metadata:&C
ontainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721604367068874246,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af1326a0d6ed525f34cf1aab737348d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbddd19a5edd22cea77e671d8b69e53eac4f429920e77d04dbf06843304bb6d0,PodSandboxId:7aa27e092fce3794c551d50c0bc4f62650d72eed041ef71b44cf79ee33ea6946,M
etadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721604367055199919,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c694666e586ab8e2ae8a2f8987d97f,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3b0c0d0677ecee63a204b386d3d9f4ff8a5d981e988b5bc69b2b331496ecca,PodSandboxId:6f4e5b3ada85774236c8ce727eb3b237cd4bff2279029f08a59c8a73a84ac133,Metadata:&Containe
rMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721604366797931469,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc84b0c10afcc2c17c70a4265c6d6c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3161be66-8f17-4b1f-87f3-a72eee2d4b40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.907826555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c50eaea-9e1d-4f01-a598-0ffc80f6c2e7 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.907899053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c50eaea-9e1d-4f01-a598-0ffc80f6c2e7 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.909227431Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1da5624-0711-4472-ba3a-b9c2569fb3a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.911928340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721604809911856131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1da5624-0711-4472-ba3a-b9c2569fb3a6 name=/runtime.v1.ImageService/ImageFsInfo
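
The Version and ImageFsInfo exchanges above are routine CRI polling: the client (here, the kubelet) asks the runtime for its identity and for image-filesystem usage. As a minimal Go sketch of the same two queries, assuming the standard k8s.io/cri-api bindings and CRI-O's default socket path /var/run/crio/crio.sock (both assumptions; this is an illustration, not minikube's or the kubelet's own code):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket; other runtimes expose a different path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// RuntimeService/Version, as in the log above.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)

	// ImageService/ImageFsInfo: per-mountpoint usage of image storage,
	// matching the UsedBytes/InodesUsed fields in the logged response.
	img := runtimeapi.NewImageServiceClient(conn)
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, u := range fs.ImageFilesystems {
		fmt.Printf("%s: %d bytes, %d inodes\n",
			u.FsId.Mountpoint, u.UsedBytes.Value, u.InodesUsed.Value)
	}
}
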
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.915217560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e551cd02-9176-4fdb-8c6a-57f2b89d01f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.915667268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e551cd02-9176-4fdb-8c6a-57f2b89d01f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.916120681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15eff6453b4ff2e0d8599ed441e82194044302e6c0e9a6a67ec75b8c42c5d30,PodSandboxId:6723880a77daa888c26e180b63b4c929395e73ea7ce018b20ff4a126af5f86e7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721604636392888052,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-j4zfd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e0542de-ebc0-4bf8-81fa-be127d873ed9,},Annotations:map[string]string{io.kubernetes.container.hash: 81bb068b,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00dc37072b53da80fa11489232575a6ad6e2250cde1c3e356514457b351ecdb,PodSandboxId:ee9bac8c8411d30426965c8ce5f2a58660119779e024547aa3b4e22a27ae9e1d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721604497444887954,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b5aca8f-8b07-4191-ba5e-991bdee098bd,},Annotations:map[string]string{io.kubernet
es.container.hash: 473150de,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11795e013590a0a6cc24c6aae8310fa0410ebc68f84706b8bb4050aaa15dda4b,PodSandboxId:e2d9f4cc3301f3bce3b473aa714e330a70da19f99d23d2084be0a58858b4a499,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721604482449212746,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-2gjtz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 38892129-f578-47c0-8299-1968efa46c65,},Annotations:map[string]string{io.kubernetes.container.hash: b264623,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10fe9c60534cd4719d699ec276725ea6aa808d05bde5a847836b3d6e95aee5,PodSandboxId:0926d78b49df19034109ae2e58b0f379d6db5354dadb2bf634f8a9153fd6564c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721604471248712557,Labels:map[string]string{io.kubernetes.container.name: gcp-auth
,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-56jkt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6072e9b1-6994-4192-a4e0-48ae9b9edecc,},Annotations:map[string]string{io.kubernetes.container.hash: b184aad0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf19c7fa60a64b14e442b8ab9bd039cb633293bbd90afea968e3628b49c0596,PodSandboxId:7e2f0f95b9112d081336ca5e657829515a4410ebb17066373cbc2ea81d895ec3,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17216044
32542215746,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-7mmml,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a67c330a-d2bc-44b5-8cf9-8245a6e01af8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ab0e0b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b098dcd64eb0df4a12eecc6985e68a85e16fb027bb0e608209b88492c70e954e,PodSandboxId:95ebd78cf6a80027888c4d405607de04828eca51113fc96233eb93713b372e85,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721604427359631527,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bstqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1f9397-4344-4d3c-a416-ee538fc6ae94,},Annotations:map[string]string{io.kubernetes.container.hash: 3587015b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216918ce9b7bbc2ae42421b5a53f7d188c1ab874575b496710855e7fc763457f,PodSandboxId:7e607d1884ecdb9a2840076ec5b3b3f2dda187dedd971056b292da88015e8578,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721604392500607150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e698e282-1395-4fd6-a797-6a0eb40bbabc,},Annotations:map[string]string{io.kubernetes.container.hash: 92df3763,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0f54f488787a87c32433dd03d9c7a4464dce0bc84b589f65e99b07587f999,PodSandboxId:514c9ecc5bf0db899743ee7029041bddb8dc387bbae6dd08b7af952139757335,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797
ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721604390043596816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wxvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1974fc-f711-4ee3-9ea9-0950557b6591,},Annotations:map[string]string{io.kubernetes.container.hash: 726416f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c969bfef3f523281aaca87bb686017810ad5369caa22f2aaf3c61d00728f4e6b,PodSandbo
xId:43264f8b65dd3f4bb2a1a2f104fde6fbbca2337a2d23eac89553dfc8e26b32e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721604387307351550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jcqpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cc3bb7-95da-48e2-9f10-bbc947e4f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: cf9d8b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7922c57b9139f00b1d11e9d4cb3c435d10e0385f96da2f8e37b4fd1f8c219ea,PodSandboxId:3639e55de2a0b72f7f36302058d79
96262a1c3eb2d5dcc63f63ebd81bd42ecde,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721604367061133899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dfa3605f18775cc841f98db38b9796,},Annotations:map[string]string{io.kubernetes.container.hash: f78b0edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ceaeb4ab41339398a0cee66e7a13e30ce8f0543200c66b0c81fbfc71e8e45,PodSandboxId:111269c2448497e5e04484046bcf5613d4dbe399d4578d9169f3ea4b1ba4e86c,Metadata:&C
ontainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721604367068874246,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af1326a0d6ed525f34cf1aab737348d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbddd19a5edd22cea77e671d8b69e53eac4f429920e77d04dbf06843304bb6d0,PodSandboxId:7aa27e092fce3794c551d50c0bc4f62650d72eed041ef71b44cf79ee33ea6946,M
etadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721604367055199919,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c694666e586ab8e2ae8a2f8987d97f,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3b0c0d0677ecee63a204b386d3d9f4ff8a5d981e988b5bc69b2b331496ecca,PodSandboxId:6f4e5b3ada85774236c8ce727eb3b237cd4bff2279029f08a59c8a73a84ac133,Metadata:&Containe
rMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721604366797931469,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc84b0c10afcc2c17c70a4265c6d6c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e551cd02-9176-4fdb-8c6a-57f2b89d01f8 name=/runtime.v1.RuntimeService/ListContainers
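
As the "No filters were applied" message above notes, a ListContainersRequest whose ContainerFilter is empty returns every container the runtime tracks, which is why each of these responses carries the full set from hello-world-app down to kube-scheduler. A minimal sketch of the same call, under the same assumptions as the previous example (k8s.io/cri-api bindings, default CRI-O socket):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", // assumed CRI-O socket
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty (or nil) ContainerFilter means "no filters":
	// the full container list comes back, as in the log.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-12.12s %-25s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}

From a shell, crictl ps -a drives the same RPC against the runtime socket.
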
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.938744479Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1cbf30e9-e8e8-4999-b888-8b481609a6bf name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.939063578Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6723880a77daa888c26e180b63b4c929395e73ea7ce018b20ff4a126af5f86e7,Metadata:&PodSandboxMetadata{Name:hello-world-app-6778b5fc9f-j4zfd,Uid:7e0542de-ebc0-4bf8-81fa-be127d873ed9,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604633858562145,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-j4zfd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e0542de-ebc0-4bf8-81fa-be127d873ed9,pod-template-hash: 6778b5fc9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-21T23:30:33.539359168Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee9bac8c8411d30426965c8ce5f2a58660119779e024547aa3b4e22a27ae9e1d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:0b5aca8f-8b07-4191-ba5e-991bdee098bd,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1721604493662364915,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b5aca8f-8b07-4191-ba5e-991bdee098bd,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-21T23:28:13.353386757Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2d9f4cc3301f3bce3b473aa714e330a70da19f99d23d2084be0a58858b4a499,Metadata:&PodSandboxMetadata{Name:headlamp-7867546754-2gjtz,Uid:38892129-f578-47c0-8299-1968efa46c65,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604475832297494,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7867546754-2gjtz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 38892129-f578-47c0-8299-1968efa46c65,pod-template-hash: 7867546754,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
07-21T23:27:55.220122430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0926d78b49df19034109ae2e58b0f379d6db5354dadb2bf634f8a9153fd6564c,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-56jkt,Uid:6072e9b1-6994-4192-a4e0-48ae9b9edecc,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604461200405327,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-56jkt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6072e9b1-6994-4192-a4e0-48ae9b9edecc,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-21T23:26:36.736964471Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e2f0f95b9112d081336ca5e657829515a4410ebb17066373cbc2ea81d895ec3,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-799879c74f-7mmml,Uid:a67c330a-d2bc-44b5-8cf9-8245a6e01af8,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1721604392999085039,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-799879c74f-7mmml,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a67c330a-d2bc-44b5-8cf9-8245a6e01af8,pod-template-hash: 799879c74f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-21T23:26:32.638509406Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95ebd78cf6a80027888c4d405607de04828eca51113fc96233eb93713b372e85,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-bstqh,Uid:ae1f9397-4344-4d3c-a416-ee538fc6ae94,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604392093325649,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-bstqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1f9397-4344-4d3c-a416-ee538fc6ae94,k8s-app: metr
ics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-21T23:26:31.779076148Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e607d1884ecdb9a2840076ec5b3b3f2dda187dedd971056b292da88015e8578,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e698e282-1395-4fd6-a797-6a0eb40bbabc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604391978178361,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e698e282-1395-4fd6-a797-6a0eb40bbabc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\
"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-21T23:26:31.294131105Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:514c9ecc5bf0db899743ee7029041bddb8dc387bbae6dd08b7af952139757335,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxvm9,Uid:2a1974fc-f711-4ee3-9ea9-0950557b6591,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604386972677696,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wxvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1974fc-f711-4ee3-9ea9-095055
7b6591,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-21T23:26:26.666590384Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43264f8b65dd3f4bb2a1a2f104fde6fbbca2337a2d23eac89553dfc8e26b32e1,Metadata:&PodSandboxMetadata{Name:kube-proxy-jcqpx,Uid:03cc3bb7-95da-48e2-9f10-bbc947e4f3ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604386767281000,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jcqpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cc3bb7-95da-48e2-9f10-bbc947e4f3ee,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-21T23:26:26.449564671Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:111269c2448497e5e04484046bcf5613d4dbe399d4578d9169f3ea4b1ba4e86c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addo
ns-688294,Uid:1af1326a0d6ed525f34cf1aab737348d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604366501977281,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af1326a0d6ed525f34cf1aab737348d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1af1326a0d6ed525f34cf1aab737348d,kubernetes.io/config.seen: 2024-07-21T23:26:06.022400372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6f4e5b3ada85774236c8ce727eb3b237cd4bff2279029f08a59c8a73a84ac133,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-688294,Uid:adc84b0c10afcc2c17c70a4265c6d6c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604366497301650,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-688294,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: adc84b0c10afcc2c17c70a4265c6d6c2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: adc84b0c10afcc2c17c70a4265c6d6c2,kubernetes.io/config.seen: 2024-07-21T23:26:06.022401665Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7aa27e092fce3794c551d50c0bc4f62650d72eed041ef71b44cf79ee33ea6946,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-688294,Uid:57c694666e586ab8e2ae8a2f8987d97f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604366485653038,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c694666e586ab8e2ae8a2f8987d97f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.142:8443,kubernetes.io/config.hash: 57c694666e586ab8e2ae8a2f8987d97f,kubernetes.io/config.
seen: 2024-07-21T23:26:06.022394648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3639e55de2a0b72f7f36302058d7996262a1c3eb2d5dcc63f63ebd81bd42ecde,Metadata:&PodSandboxMetadata{Name:etcd-addons-688294,Uid:03dfa3605f18775cc841f98db38b9796,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721604366478979237,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dfa3605f18775cc841f98db38b9796,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.142:2379,kubernetes.io/config.hash: 03dfa3605f18775cc841f98db38b9796,kubernetes.io/config.seen: 2024-07-21T23:26:06.022391208Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1cbf30e9-e8e8-4999-b888-8b481609a6bf name=/runtime.v1.RuntimeService/ListPodSandbox
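
ListPodSandbox returns the pod-level sandboxes rather than the containers themselves; each container in the earlier responses points back to its sandbox through PodSandboxId. A sketch that joins the two lists, again assuming the k8s.io/cri-api bindings and the default CRI-O socket:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", // assumed CRI-O socket
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// RuntimeService/ListPodSandbox with a nil filter, as in the log above.
	pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	// Index sandboxes by id so containers can be joined to their pods.
	podName := make(map[string]string, len(pods.Items))
	for _, p := range pods.Items {
		podName[p.Id] = p.Metadata.Namespace + "/" + p.Metadata.Name
	}

	ctrs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range ctrs.Containers {
		fmt.Printf("%-25s -> %s\n", c.Metadata.Name, podName[c.PodSandboxId])
	}
}
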
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.940081409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00761e04-7901-4159-8504-28a7e26f9b17 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.940202030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00761e04-7901-4159-8504-28a7e26f9b17 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.940487354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15eff6453b4ff2e0d8599ed441e82194044302e6c0e9a6a67ec75b8c42c5d30,PodSandboxId:6723880a77daa888c26e180b63b4c929395e73ea7ce018b20ff4a126af5f86e7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721604636392888052,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-j4zfd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e0542de-ebc0-4bf8-81fa-be127d873ed9,},Annotations:map[string]string{io.kubernetes.container.hash: 81bb068b,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00dc37072b53da80fa11489232575a6ad6e2250cde1c3e356514457b351ecdb,PodSandboxId:ee9bac8c8411d30426965c8ce5f2a58660119779e024547aa3b4e22a27ae9e1d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721604497444887954,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b5aca8f-8b07-4191-ba5e-991bdee098bd,},Annotations:map[string]string{io.kubernet
es.container.hash: 473150de,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11795e013590a0a6cc24c6aae8310fa0410ebc68f84706b8bb4050aaa15dda4b,PodSandboxId:e2d9f4cc3301f3bce3b473aa714e330a70da19f99d23d2084be0a58858b4a499,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721604482449212746,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-2gjtz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 38892129-f578-47c0-8299-1968efa46c65,},Annotations:map[string]string{io.kubernetes.container.hash: b264623,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10fe9c60534cd4719d699ec276725ea6aa808d05bde5a847836b3d6e95aee5,PodSandboxId:0926d78b49df19034109ae2e58b0f379d6db5354dadb2bf634f8a9153fd6564c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721604471248712557,Labels:map[string]string{io.kubernetes.container.name: gcp-auth
,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-56jkt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6072e9b1-6994-4192-a4e0-48ae9b9edecc,},Annotations:map[string]string{io.kubernetes.container.hash: b184aad0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf19c7fa60a64b14e442b8ab9bd039cb633293bbd90afea968e3628b49c0596,PodSandboxId:7e2f0f95b9112d081336ca5e657829515a4410ebb17066373cbc2ea81d895ec3,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17216044
32542215746,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-7mmml,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a67c330a-d2bc-44b5-8cf9-8245a6e01af8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ab0e0b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b098dcd64eb0df4a12eecc6985e68a85e16fb027bb0e608209b88492c70e954e,PodSandboxId:95ebd78cf6a80027888c4d405607de04828eca51113fc96233eb93713b372e85,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721604427359631527,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bstqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1f9397-4344-4d3c-a416-ee538fc6ae94,},Annotations:map[string]string{io.kubernetes.container.hash: 3587015b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216918ce9b7bbc2ae42421b5a53f7d188c1ab874575b496710855e7fc763457f,PodSandboxId:7e607d1884ecdb9a2840076ec5b3b3f2dda187dedd971056b292da88015e8578,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721604392500607150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e698e282-1395-4fd6-a797-6a0eb40bbabc,},Annotations:map[string]string{io.kubernetes.container.hash: 92df3763,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0f54f488787a87c32433dd03d9c7a4464dce0bc84b589f65e99b07587f999,PodSandboxId:514c9ecc5bf0db899743ee7029041bddb8dc387bbae6dd08b7af952139757335,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797
ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721604390043596816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wxvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1974fc-f711-4ee3-9ea9-0950557b6591,},Annotations:map[string]string{io.kubernetes.container.hash: 726416f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c969bfef3f523281aaca87bb686017810ad5369caa22f2aaf3c61d00728f4e6b,PodSandbo
xId:43264f8b65dd3f4bb2a1a2f104fde6fbbca2337a2d23eac89553dfc8e26b32e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721604387307351550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jcqpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cc3bb7-95da-48e2-9f10-bbc947e4f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: cf9d8b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7922c57b9139f00b1d11e9d4cb3c435d10e0385f96da2f8e37b4fd1f8c219ea,PodSandboxId:3639e55de2a0b72f7f36302058d79
96262a1c3eb2d5dcc63f63ebd81bd42ecde,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721604367061133899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dfa3605f18775cc841f98db38b9796,},Annotations:map[string]string{io.kubernetes.container.hash: f78b0edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ceaeb4ab41339398a0cee66e7a13e30ce8f0543200c66b0c81fbfc71e8e45,PodSandboxId:111269c2448497e5e04484046bcf5613d4dbe399d4578d9169f3ea4b1ba4e86c,Metadata:&C
ontainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721604367068874246,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af1326a0d6ed525f34cf1aab737348d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbddd19a5edd22cea77e671d8b69e53eac4f429920e77d04dbf06843304bb6d0,PodSandboxId:7aa27e092fce3794c551d50c0bc4f62650d72eed041ef71b44cf79ee33ea6946,M
etadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721604367055199919,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c694666e586ab8e2ae8a2f8987d97f,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3b0c0d0677ecee63a204b386d3d9f4ff8a5d981e988b5bc69b2b331496ecca,PodSandboxId:6f4e5b3ada85774236c8ce727eb3b237cd4bff2279029f08a59c8a73a84ac133,Metadata:&Containe
rMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721604366797931469,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc84b0c10afcc2c17c70a4265c6d6c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00761e04-7901-4159-8504-28a7e26f9b17 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.959299554Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d61f6ac-ffb7-4ecc-b227-fed6d8ae836d name=/runtime.v1.RuntimeService/Version
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.959387294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d61f6ac-ffb7-4ecc-b227-fed6d8ae836d name=/runtime.v1.RuntimeService/Version
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.962971349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=423c825c-d2ee-4c2a-b070-6d288f543840 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.965752955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721604809965620430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=423c825c-d2ee-4c2a-b070-6d288f543840 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.966613138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8dfeac5-db1a-42fc-8246-c739a8ab0a47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.966685214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8dfeac5-db1a-42fc-8246-c739a8ab0a47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:33:29 addons-688294 crio[682]: time="2024-07-21 23:33:29.966986929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15eff6453b4ff2e0d8599ed441e82194044302e6c0e9a6a67ec75b8c42c5d30,PodSandboxId:6723880a77daa888c26e180b63b4c929395e73ea7ce018b20ff4a126af5f86e7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721604636392888052,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-j4zfd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e0542de-ebc0-4bf8-81fa-be127d873ed9,},Annotations:map[string]string{io.kubernetes.container.hash: 81bb068b,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00dc37072b53da80fa11489232575a6ad6e2250cde1c3e356514457b351ecdb,PodSandboxId:ee9bac8c8411d30426965c8ce5f2a58660119779e024547aa3b4e22a27ae9e1d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721604497444887954,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b5aca8f-8b07-4191-ba5e-991bdee098bd,},Annotations:map[string]string{io.kubernet
es.container.hash: 473150de,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11795e013590a0a6cc24c6aae8310fa0410ebc68f84706b8bb4050aaa15dda4b,PodSandboxId:e2d9f4cc3301f3bce3b473aa714e330a70da19f99d23d2084be0a58858b4a499,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721604482449212746,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-2gjtz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 38892129-f578-47c0-8299-1968efa46c65,},Annotations:map[string]string{io.kubernetes.container.hash: b264623,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10fe9c60534cd4719d699ec276725ea6aa808d05bde5a847836b3d6e95aee5,PodSandboxId:0926d78b49df19034109ae2e58b0f379d6db5354dadb2bf634f8a9153fd6564c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721604471248712557,Labels:map[string]string{io.kubernetes.container.name: gcp-auth
,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-56jkt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6072e9b1-6994-4192-a4e0-48ae9b9edecc,},Annotations:map[string]string{io.kubernetes.container.hash: b184aad0,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bf19c7fa60a64b14e442b8ab9bd039cb633293bbd90afea968e3628b49c0596,PodSandboxId:7e2f0f95b9112d081336ca5e657829515a4410ebb17066373cbc2ea81d895ec3,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17216044
32542215746,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-7mmml,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a67c330a-d2bc-44b5-8cf9-8245a6e01af8,},Annotations:map[string]string{io.kubernetes.container.hash: 99ab0e0b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b098dcd64eb0df4a12eecc6985e68a85e16fb027bb0e608209b88492c70e954e,PodSandboxId:95ebd78cf6a80027888c4d405607de04828eca51113fc96233eb93713b372e85,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721604427359631527,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bstqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1f9397-4344-4d3c-a416-ee538fc6ae94,},Annotations:map[string]string{io.kubernetes.container.hash: 3587015b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216918ce9b7bbc2ae42421b5a53f7d188c1ab874575b496710855e7fc763457f,PodSandboxId:7e607d1884ecdb9a2840076ec5b3b3f2dda187dedd971056b292da88015e8578,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721604392500607150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e698e282-1395-4fd6-a797-6a0eb40bbabc,},Annotations:map[string]string{io.kubernetes.container.hash: 92df3763,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0f54f488787a87c32433dd03d9c7a4464dce0bc84b589f65e99b07587f999,PodSandboxId:514c9ecc5bf0db899743ee7029041bddb8dc387bbae6dd08b7af952139757335,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797
ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721604390043596816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wxvm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1974fc-f711-4ee3-9ea9-0950557b6591,},Annotations:map[string]string{io.kubernetes.container.hash: 726416f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c969bfef3f523281aaca87bb686017810ad5369caa22f2aaf3c61d00728f4e6b,PodSandbo
xId:43264f8b65dd3f4bb2a1a2f104fde6fbbca2337a2d23eac89553dfc8e26b32e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721604387307351550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jcqpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03cc3bb7-95da-48e2-9f10-bbc947e4f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: cf9d8b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7922c57b9139f00b1d11e9d4cb3c435d10e0385f96da2f8e37b4fd1f8c219ea,PodSandboxId:3639e55de2a0b72f7f36302058d79
96262a1c3eb2d5dcc63f63ebd81bd42ecde,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721604367061133899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dfa3605f18775cc841f98db38b9796,},Annotations:map[string]string{io.kubernetes.container.hash: f78b0edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ceaeb4ab41339398a0cee66e7a13e30ce8f0543200c66b0c81fbfc71e8e45,PodSandboxId:111269c2448497e5e04484046bcf5613d4dbe399d4578d9169f3ea4b1ba4e86c,Metadata:&C
ontainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721604367068874246,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af1326a0d6ed525f34cf1aab737348d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbddd19a5edd22cea77e671d8b69e53eac4f429920e77d04dbf06843304bb6d0,PodSandboxId:7aa27e092fce3794c551d50c0bc4f62650d72eed041ef71b44cf79ee33ea6946,M
etadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721604367055199919,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c694666e586ab8e2ae8a2f8987d97f,},Annotations:map[string]string{io.kubernetes.container.hash: f00d3253,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3b0c0d0677ecee63a204b386d3d9f4ff8a5d981e988b5bc69b2b331496ecca,PodSandboxId:6f4e5b3ada85774236c8ce727eb3b237cd4bff2279029f08a59c8a73a84ac133,Metadata:&Containe
rMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721604366797931469,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-688294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc84b0c10afcc2c17c70a4265c6d6c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8dfeac5-db1a-42fc-8246-c739a8ab0a47 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d15eff6453b4f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   6723880a77daa       hello-world-app-6778b5fc9f-j4zfd
	e00dc37072b53       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   ee9bac8c8411d       nginx
	11795e013590a       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   e2d9f4cc3301f       headlamp-7867546754-2gjtz
	ec10fe9c60534       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   0926d78b49df1       gcp-auth-5db96cd9b4-56jkt
	5bf19c7fa60a6       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         6 minutes ago       Running             yakd                      0                   7e2f0f95b9112       yakd-dashboard-799879c74f-7mmml
	b098dcd64eb0d       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   95ebd78cf6a80       metrics-server-c59844bb4-bstqh
	216918ce9b7bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        6 minutes ago       Running             storage-provisioner       0                   7e607d1884ecd       storage-provisioner
	2da0f54f48878       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   514c9ecc5bf0d       coredns-7db6d8ff4d-wxvm9
	c969bfef3f523       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   43264f8b65dd3       kube-proxy-jcqpx
	a75ceaeb4ab41       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   111269c244849       kube-controller-manager-addons-688294
	b7922c57b9139       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   3639e55de2a0b       etcd-addons-688294
	cbddd19a5edd2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   7aa27e092fce3       kube-apiserver-addons-688294
	fb3b0c0d0677e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   6f4e5b3ada857       kube-scheduler-addons-688294
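
The container status table above is the CRI's view of the node: every pod from the test run (hello-world-app, nginx, headlamp, gcp-auth, yakd, metrics-server) is Running with attempt 0 alongside the control-plane containers. The JSON-ish dump preceding it is CRI-O's trace of the same data, a /runtime.v1.RuntimeService/ListContainers response logged by its otel-collector interceptor. The table can be reproduced against a live profile with crictl inside the minikube VM (assuming the addons-688294 profile is still up):

    out/minikube-linux-amd64 -p addons-688294 ssh "sudo crictl ps -a"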
	
	
	==> coredns [2da0f54f488787a87c32433dd03d9c7a4464dce0bc84b589f65e99b07587f999] <==
	[INFO] 10.244.0.7:44491 - 49631 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000442857s
	[INFO] 10.244.0.7:35435 - 56763 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142408s
	[INFO] 10.244.0.7:35435 - 43452 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104318s
	[INFO] 10.244.0.7:42446 - 28053 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089944s
	[INFO] 10.244.0.7:42446 - 52888 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177134s
	[INFO] 10.244.0.7:40235 - 46989 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105607s
	[INFO] 10.244.0.7:40235 - 35211 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00023455s
	[INFO] 10.244.0.7:58190 - 17849 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088716s
	[INFO] 10.244.0.7:58190 - 60855 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000044261s
	[INFO] 10.244.0.7:46913 - 7652 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000057194s
	[INFO] 10.244.0.7:46913 - 13282 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034767s
	[INFO] 10.244.0.7:46291 - 42719 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041392s
	[INFO] 10.244.0.7:46291 - 16833 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034606s
	[INFO] 10.244.0.7:35794 - 28979 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000124279s
	[INFO] 10.244.0.7:35794 - 47437 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000118998s
	[INFO] 10.244.0.22:60002 - 20685 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000307511s
	[INFO] 10.244.0.22:49713 - 47982 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143484s
	[INFO] 10.244.0.22:46007 - 19868 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000180907s
	[INFO] 10.244.0.22:39339 - 65062 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139681s
	[INFO] 10.244.0.22:58851 - 4840 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126748s
	[INFO] 10.244.0.22:33782 - 53305 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000195113s
	[INFO] 10.244.0.22:43856 - 8947 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000556053s
	[INFO] 10.244.0.22:46305 - 39605 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000583225s
	[INFO] 10.244.0.25:37615 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321788s
	[INFO] 10.244.0.25:48006 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000219016s
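
The long run of NXDOMAIN answers above is expected resolver behavior, not an error: with the cluster-default ndots:5 setting, a lookup for registry.kube-system.svc.cluster.local is first retried with each search-path suffix appended (.kube-system.svc.cluster.local, .svc.cluster.local, .cluster.local), and only the final absolute query returns NOERROR. For a pod in the kube-system namespace, /etc/resolv.conf typically looks like the sketch below; the nameserver address is the conventional kube-dns ClusterIP and should be verified per cluster:

    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5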
	
	
	==> describe nodes <==
	Name:               addons-688294
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-688294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=addons-688294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T23_26_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-688294
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:26:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-688294
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:33:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:30:47 +0000   Sun, 21 Jul 2024 23:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:30:47 +0000   Sun, 21 Jul 2024 23:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:30:47 +0000   Sun, 21 Jul 2024 23:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:30:47 +0000   Sun, 21 Jul 2024 23:26:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    addons-688294
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 431e62ca24b4445b82feed907221d613
	  System UUID:                431e62ca-24b4-445b-82fe-ed907221d613
	  Boot ID:                    f5af4e40-e7a5-42da-a1c1-a4ffed10427f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-j4zfd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  gcp-auth                    gcp-auth-5db96cd9b4-56jkt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	  headlamp                    headlamp-7867546754-2gjtz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 coredns-7db6d8ff4d-wxvm9                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m4s
	  kube-system                 etcd-addons-688294                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m19s
	  kube-system                 kube-apiserver-addons-688294             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-controller-manager-addons-688294    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-proxy-jcqpx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-scheduler-addons-688294             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 metrics-server-c59844bb4-bstqh           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m59s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  yakd-dashboard              yakd-dashboard-799879c74f-7mmml          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     6m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m2s   kube-proxy       
	  Normal  Starting                 7m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m18s  kubelet          Node addons-688294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m18s  kubelet          Node addons-688294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m18s  kubelet          Node addons-688294 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m17s  kubelet          Node addons-688294 status is now: NodeReady
	  Normal  RegisteredNode           7m5s   node-controller  Node addons-688294 event: Registered Node addons-688294 in Controller
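
The node description above can be regenerated at any time while the profile exists; everything shown (labels, conditions, capacity, the per-pod resource table) comes from a single kubectl call against the context minikube created:

    kubectl --context addons-688294 describe node addons-688294

All four pressure conditions are False and the node went Ready within seconds of kubelet start, so at capture time the node itself was healthy.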
	
	
	==> dmesg <==
	[  +9.203854] systemd-fstab-generator[1484]: Ignoring "noauto" option for root device
	[  +5.215554] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.049865] kauditd_printk_skb: 163 callbacks suppressed
	[  +6.544472] kauditd_printk_skb: 36 callbacks suppressed
	[Jul21 23:27] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.659508] kauditd_printk_skb: 25 callbacks suppressed
	[ +11.960010] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.211945] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.300505] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.196229] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.182157] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.052138] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.768790] kauditd_printk_skb: 47 callbacks suppressed
	[Jul21 23:28] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.611592] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.521517] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.195302] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.025632] kauditd_printk_skb: 9 callbacks suppressed
	[ +30.049239] kauditd_printk_skb: 23 callbacks suppressed
	[Jul21 23:29] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.133264] kauditd_printk_skb: 8 callbacks suppressed
	[ +17.641905] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.326491] kauditd_printk_skb: 33 callbacks suppressed
	[Jul21 23:30] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.227683] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b7922c57b9139f00b1d11e9d4cb3c435d10e0385f96da2f8e37b4fd1f8c219ea] <==
	{"level":"warn","ts":"2024-07-21T23:27:34.403625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"490.775677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-21T23:27:34.403666Z","caller":"traceutil/trace.go:171","msg":"trace[688219142] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1041; }","duration":"490.841201ms","start":"2024-07-21T23:27:33.912817Z","end":"2024-07-21T23:27:34.403658Z","steps":["trace[688219142] 'count revisions from in-memory index tree'  (duration: 490.730217ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:27:34.403687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:27:33.912799Z","time spent":"490.881389ms","remote":"127.0.0.1:59532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":21,"response size":31,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true "}
	{"level":"warn","ts":"2024-07-21T23:27:34.403749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.028631ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-07-21T23:27:34.403848Z","caller":"traceutil/trace.go:171","msg":"trace[1474776033] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1041; }","duration":"117.15471ms","start":"2024-07-21T23:27:34.286685Z","end":"2024-07-21T23:27:34.403839Z","steps":["trace[1474776033] 'range keys from in-memory index tree'  (duration: 116.618975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:27:34.40382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"441.963641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-21T23:27:34.404109Z","caller":"traceutil/trace.go:171","msg":"trace[214288561] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1041; }","duration":"442.273326ms","start":"2024-07-21T23:27:33.961827Z","end":"2024-07-21T23:27:34.4041Z","steps":["trace[214288561] 'range keys from in-memory index tree'  (duration: 441.87972ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:27:34.404241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:27:33.96181Z","time spent":"442.417029ms","remote":"127.0.0.1:59420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14387,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-07-21T23:28:01.363788Z","caller":"traceutil/trace.go:171","msg":"trace[1999037915] linearizableReadLoop","detail":"{readStateIndex:1252; appliedIndex:1251; }","duration":"158.291381ms","start":"2024-07-21T23:28:01.205475Z","end":"2024-07-21T23:28:01.363767Z","steps":["trace[1999037915] 'read index received'  (duration: 158.160994ms)","trace[1999037915] 'applied index is now lower than readState.Index'  (duration: 129.865µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-21T23:28:01.364185Z","caller":"traceutil/trace.go:171","msg":"trace[1446860162] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1216; }","duration":"352.618638ms","start":"2024-07-21T23:28:01.011509Z","end":"2024-07-21T23:28:01.364127Z","steps":["trace[1446860162] 'process raft request'  (duration: 352.165667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:28:01.364358Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:28:01.011496Z","time spent":"352.757616ms","remote":"127.0.0.1:59658","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":42,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:871 > success:<request_delete_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > > failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
	{"level":"warn","ts":"2024-07-21T23:28:01.364622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.152365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-07-21T23:28:01.365662Z","caller":"traceutil/trace.go:171","msg":"trace[1914586065] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1216; }","duration":"160.214386ms","start":"2024-07-21T23:28:01.205439Z","end":"2024-07-21T23:28:01.365653Z","steps":["trace[1914586065] 'agreement among raft nodes before linearized reading'  (duration: 159.07186ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:28:01.365193Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.77129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.142\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-21T23:28:01.368999Z","caller":"traceutil/trace.go:171","msg":"trace[2032890233] range","detail":"{range_begin:/registry/masterleases/192.168.39.142; range_end:; response_count:1; response_revision:1216; }","duration":"110.598297ms","start":"2024-07-21T23:28:01.25839Z","end":"2024-07-21T23:28:01.368988Z","steps":["trace[2032890233] 'agreement among raft nodes before linearized reading'  (duration: 106.707832ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-21T23:28:22.3534Z","caller":"traceutil/trace.go:171","msg":"trace[372063583] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"288.013913ms","start":"2024-07-21T23:28:22.065359Z","end":"2024-07-21T23:28:22.353373Z","steps":["trace[372063583] 'process raft request'  (duration: 287.566862ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:28:22.35388Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.83428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-21T23:28:22.354132Z","caller":"traceutil/trace.go:171","msg":"trace[1984484969] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1402; }","duration":"224.134616ms","start":"2024-07-21T23:28:22.129987Z","end":"2024-07-21T23:28:22.354122Z","steps":["trace[1984484969] 'agreement among raft nodes before linearized reading'  (duration: 223.838223ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-21T23:28:22.353776Z","caller":"traceutil/trace.go:171","msg":"trace[183201219] linearizableReadLoop","detail":"{readStateIndex:1447; appliedIndex:1446; }","duration":"223.107323ms","start":"2024-07-21T23:28:22.130013Z","end":"2024-07-21T23:28:22.35312Z","steps":["trace[183201219] 'read index received'  (duration: 222.972893ms)","trace[183201219] 'applied index is now lower than readState.Index'  (duration: 133.751µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-21T23:28:22.355349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.578485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-21T23:28:22.355436Z","caller":"traceutil/trace.go:171","msg":"trace[1840188235] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1402; }","duration":"167.690355ms","start":"2024-07-21T23:28:22.187738Z","end":"2024-07-21T23:28:22.355428Z","steps":["trace[1840188235] 'agreement among raft nodes before linearized reading'  (duration: 166.761075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:29:07.032367Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.801652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:9302"}
	{"level":"info","ts":"2024-07-21T23:29:07.032444Z","caller":"traceutil/trace.go:171","msg":"trace[1411897326] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1549; }","duration":"311.914818ms","start":"2024-07-21T23:29:06.720505Z","end":"2024-07-21T23:29:07.03242Z","steps":["trace[1411897326] 'range keys from in-memory index tree'  (duration: 311.612857ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:29:07.032485Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:29:06.720462Z","time spent":"312.012186ms","remote":"127.0.0.1:59420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":9326,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-07-21T23:29:12.69364Z","caller":"traceutil/trace.go:171","msg":"trace[1503670051] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"152.311423ms","start":"2024-07-21T23:29:12.541308Z","end":"2024-07-21T23:29:12.693619Z","steps":["trace[1503670051] 'process raft request'  (duration: 152.002057ms)"],"step_count":1}
	
	
	==> gcp-auth [ec10fe9c60534cd4719d699ec276725ea6aa808d05bde5a847836b3d6e95aee5] <==
	2024/07/21 23:27:51 GCP Auth Webhook started!
	2024/07/21 23:27:55 Ready to marshal response ...
	2024/07/21 23:27:55 Ready to write response ...
	2024/07/21 23:27:55 Ready to marshal response ...
	2024/07/21 23:27:55 Ready to write response ...
	2024/07/21 23:27:55 Ready to marshal response ...
	2024/07/21 23:27:55 Ready to write response ...
	2024/07/21 23:28:00 Ready to marshal response ...
	2024/07/21 23:28:00 Ready to write response ...
	2024/07/21 23:28:06 Ready to marshal response ...
	2024/07/21 23:28:06 Ready to write response ...
	2024/07/21 23:28:13 Ready to marshal response ...
	2024/07/21 23:28:13 Ready to write response ...
	2024/07/21 23:28:19 Ready to marshal response ...
	2024/07/21 23:28:19 Ready to write response ...
	2024/07/21 23:28:19 Ready to marshal response ...
	2024/07/21 23:28:19 Ready to write response ...
	2024/07/21 23:28:31 Ready to marshal response ...
	2024/07/21 23:28:31 Ready to write response ...
	2024/07/21 23:28:59 Ready to marshal response ...
	2024/07/21 23:28:59 Ready to write response ...
	2024/07/21 23:29:31 Ready to marshal response ...
	2024/07/21 23:29:31 Ready to write response ...
	2024/07/21 23:30:33 Ready to marshal response ...
	2024/07/21 23:30:33 Ready to write response ...
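
Each "Ready to marshal response ... Ready to write response" pair above appears to correspond to one admission request handled by the gcp-auth webhook, i.e. one pod creation it was asked to mutate; the timestamps line up with pods created during the parallel addon tests. The webhook registration itself can be inspected with (assuming the context):

    kubectl --context addons-688294 get mutatingwebhookconfigurations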
	
	
	==> kernel <==
	 23:33:30 up 7 min,  0 users,  load average: 0.05, 0.62, 0.47
	Linux addons-688294 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cbddd19a5edd22cea77e671d8b69e53eac4f429920e77d04dbf06843304bb6d0] <==
	W0721 23:28:12.190944       1 handler_proxy.go:93] no RequestInfo found in the context
	E0721 23:28:12.191015       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0721 23:28:12.191714       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.108.25:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.108.25:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.108.25:443: connect: connection refused
	E0721 23:28:12.193399       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.108.25:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.108.25:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.108.25:443: connect: connection refused
	I0721 23:28:12.264274       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0721 23:28:13.211526       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0721 23:28:13.399045       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.156.89"}
	I0721 23:28:14.225086       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0721 23:28:15.260435       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0721 23:28:47.244893       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0721 23:29:13.776721       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0721 23:29:46.396661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0721 23:29:46.397637       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0721 23:29:46.424564       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0721 23:29:46.424609       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0721 23:29:46.442001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0721 23:29:46.442049       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0721 23:29:46.453866       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0721 23:29:46.453952       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0721 23:29:47.426197       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0721 23:29:47.454108       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0721 23:29:47.481875       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0721 23:30:33.686824       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.233.65"}
	E0721 23:30:35.942526       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
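
The v1beta1.metrics.k8s.io errors near the top of the apiserver log match the TestAddons/parallel/MetricsServer failure in the summary: the aggregated APIService was registered, but the backing metrics-server endpoint at 10.110.108.25:443 refused connections at that moment. Two quick checks against a live profile (assuming the context) show whether the aggregation layer considers the service available and whether it has endpoints:

    kubectl --context addons-688294 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-688294 -n kube-system get endpoints metrics-server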
	
	
	==> kube-controller-manager [a75ceaeb4ab41339398a0cee66e7a13e30ce8f0543200c66b0c81fbfc71e8e45] <==
	W0721 23:31:08.963753       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:31:08.963815       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:31:35.016648       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:31:35.016745       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:31:44.455284       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:31:44.455338       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:31:51.487803       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:31:51.487929       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:31:56.552754       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:31:56.552868       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:32:28.247884       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:32:28.247945       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:32:34.469924       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:32:34.469970       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:32:37.589750       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:32:37.589899       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:32:51.093585       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:32:51.093652       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:33:10.540466       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:33:10.540630       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:33:17.179301       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:33:17.179594       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0721 23:33:23.479850       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0721 23:33:23.479889       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0721 23:33:28.919418       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="9.441µs"
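
The controller-manager's repeated "failed to list *v1.PartialObjectMetadata" warnings are a follow-on effect of CRD removal, visible in the apiserver log above: the snapshot.storage.k8s.io API groups were torn down at 23:29:46-47, after which the garbage collector's metadata informers keep retrying list/watch for resource types that no longer exist until discovery refreshes. Which CRDs remain can be checked with (assuming the context):

    kubectl --context addons-688294 get crd | grep -E 'snapshot|gadget'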
	
	
	==> kube-proxy [c969bfef3f523281aaca87bb686017810ad5369caa22f2aaf3c61d00728f4e6b] <==
	I0721 23:26:28.077713       1 server_linux.go:69] "Using iptables proxy"
	I0721 23:26:28.089017       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.142"]
	I0721 23:26:28.219692       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:26:28.219739       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:26:28.219755       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:26:28.224119       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:26:28.224343       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:26:28.224364       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:26:28.225217       1 config.go:319] "Starting node config controller"
	I0721 23:26:28.225236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:26:28.225464       1 config.go:192] "Starting service config controller"
	I0721 23:26:28.225473       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:26:28.225493       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:26:28.225497       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:26:28.325936       1 shared_informer.go:320] Caches are synced for node config
	I0721 23:26:28.325953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0721 23:26:28.325964       1 shared_informer.go:320] Caches are synced for service config
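
kube-proxy found no IPv6 iptables support in this guest kernel and fell back to single-stack IPv4 iptables mode, which is consistent with the kubelet's ip6tables canary failures further down. The service rules it programmed can be inspected on the node (assuming the profile is still running):

    out/minikube-linux-amd64 -p addons-688294 ssh "sudo iptables -t nat -L KUBE-SERVICES"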
	
	
	==> kube-scheduler [fb3b0c0d0677ecee63a204b386d3d9f4ff8a5d981e988b5bc69b2b331496ecca] <==
	W0721 23:26:09.648718       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0721 23:26:09.648748       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0721 23:26:09.648807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0721 23:26:09.648831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0721 23:26:09.650009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0721 23:26:09.650042       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0721 23:26:10.452907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0721 23:26:10.452958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0721 23:26:10.514126       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0721 23:26:10.514215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0721 23:26:10.606629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0721 23:26:10.606756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0721 23:26:10.674409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0721 23:26:10.674454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0721 23:26:10.675246       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0721 23:26:10.675305       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0721 23:26:10.714219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0721 23:26:10.714330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0721 23:26:10.785423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0721 23:26:10.785554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0721 23:26:10.824394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:26:10.824440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0721 23:26:10.884238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0721 23:26:10.884282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0721 23:26:13.441723       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
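
The scheduler's "forbidden" warnings are the usual bootstrap race: kube-scheduler starts before the apiserver has published the RBAC bindings it needs, so its first list/watch calls are rejected; the final "Caches are synced" line at 23:26:13 shows it recovered without intervention. Whether the permissions are in place afterwards can be confirmed via impersonation (assuming the context):

    kubectl --context addons-688294 auth can-i list nodes --as system:kube-scheduler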
	
	
	==> kubelet <==
	Jul 21 23:30:39 addons-688294 kubelet[1273]: I0721 23:30:39.287659    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13"} err="failed to get container status \"1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13\": rpc error: code = NotFound desc = could not find container \"1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13\": container with ID starting with 1ecee29fe20d601f78590e273180cb36995490b94d687f548c60be1735e54d13 not found: ID does not exist"
	Jul 21 23:30:40 addons-688294 kubelet[1273]: I0721 23:30:40.000292    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd164e0-81d7-4889-8624-214c83da34d7" path="/var/lib/kubelet/pods/1dd164e0-81d7-4889-8624-214c83da34d7/volumes"
	Jul 21 23:31:12 addons-688294 kubelet[1273]: E0721 23:31:12.015756    1273 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:31:12 addons-688294 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:31:12 addons-688294 kubelet[1273]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:31:12 addons-688294 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:31:12 addons-688294 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:31:12 addons-688294 kubelet[1273]: I0721 23:31:12.936491    1273 scope.go:117] "RemoveContainer" containerID="ac006a39a11cbc7ce6f68d1c7e7114fe1ddcb4bee444dcfa3ef43edb205e4628"
	Jul 21 23:31:12 addons-688294 kubelet[1273]: I0721 23:31:12.955267    1273 scope.go:117] "RemoveContainer" containerID="53d526fcd9f9fe0c3595afd522eca1205c12481098df67cc34a06379fc7ecab0"
	Jul 21 23:32:12 addons-688294 kubelet[1273]: E0721 23:32:12.016903    1273 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:32:12 addons-688294 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:32:12 addons-688294 kubelet[1273]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:32:12 addons-688294 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:32:12 addons-688294 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:33:12 addons-688294 kubelet[1273]: E0721 23:33:12.015610    1273 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:33:12 addons-688294 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:33:12 addons-688294 kubelet[1273]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:33:12 addons-688294 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:33:12 addons-688294 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:33:30 addons-688294 kubelet[1273]: I0721 23:33:30.370424    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ae1f9397-4344-4d3c-a416-ee538fc6ae94-tmp-dir\") pod \"ae1f9397-4344-4d3c-a416-ee538fc6ae94\" (UID: \"ae1f9397-4344-4d3c-a416-ee538fc6ae94\") "
	Jul 21 23:33:30 addons-688294 kubelet[1273]: I0721 23:33:30.370495    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2b7b\" (UniqueName: \"kubernetes.io/projected/ae1f9397-4344-4d3c-a416-ee538fc6ae94-kube-api-access-z2b7b\") pod \"ae1f9397-4344-4d3c-a416-ee538fc6ae94\" (UID: \"ae1f9397-4344-4d3c-a416-ee538fc6ae94\") "
	Jul 21 23:33:30 addons-688294 kubelet[1273]: I0721 23:33:30.371265    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae1f9397-4344-4d3c-a416-ee538fc6ae94-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "ae1f9397-4344-4d3c-a416-ee538fc6ae94" (UID: "ae1f9397-4344-4d3c-a416-ee538fc6ae94"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 21 23:33:30 addons-688294 kubelet[1273]: I0721 23:33:30.373708    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1f9397-4344-4d3c-a416-ee538fc6ae94-kube-api-access-z2b7b" (OuterVolumeSpecName: "kube-api-access-z2b7b") pod "ae1f9397-4344-4d3c-a416-ee538fc6ae94" (UID: "ae1f9397-4344-4d3c-a416-ee538fc6ae94"). InnerVolumeSpecName "kube-api-access-z2b7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 21 23:33:30 addons-688294 kubelet[1273]: I0721 23:33:30.472772    1273 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ae1f9397-4344-4d3c-a416-ee538fc6ae94-tmp-dir\") on node \"addons-688294\" DevicePath \"\""
	Jul 21 23:33:30 addons-688294 kubelet[1273]: I0721 23:33:30.472801    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z2b7b\" (UniqueName: \"kubernetes.io/projected/ae1f9397-4344-4d3c-a416-ee538fc6ae94-kube-api-access-z2b7b\") on node \"addons-688294\" DevicePath \"\""
	
	
	==> storage-provisioner [216918ce9b7bbc2ae42421b5a53f7d188c1ab874575b496710855e7fc763457f] <==
	I0721 23:26:32.965409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0721 23:26:32.989049       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0721 23:26:32.989213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0721 23:26:33.013356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0721 23:26:33.013501       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-688294_31961987-1828-4958-ba43-c9112c88d31d!
	I0721 23:26:33.014273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e6cc932-54bb-49fd-b538-3f5ffac98293", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-688294_31961987-1828-4958-ba43-c9112c88d31d became leader
	I0721 23:26:33.114236       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-688294_31961987-1828-4958-ba43-c9112c88d31d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-688294 -n addons-688294
helpers_test.go:261: (dbg) Run:  kubectl --context addons-688294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (322.91s)
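
Note: the kube-scheduler "forbidden" warnings earlier in this post-mortem are typical startup noise; the informers keep retrying until the scheduler's RBAC bindings are bootstrapped, and the "Caches are synced" line shows they recovered. As a hedged sanity check (a sketch, assuming kubectl still points at the addons-688294 context and the admin user may impersonate), the denied verbs can be re-tested directly:

	# Re-check the list permissions the scheduler was denied at startup
	# (impersonation rights for the current user are assumed)
	kubectl --context addons-688294 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler
	kubectl --context addons-688294 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler

Both should print "yes" once the bootstrap bindings are in place.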

                                                
                                    
TestAddons/StoppedEnableDisable (154.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-688294
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-688294: exit status 82 (2m0.440328228s)

                                                
                                                
-- stdout --
	* Stopping node "addons-688294"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-688294" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-688294
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-688294: exit status 11 (21.673965123s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-688294" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-688294
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-688294: exit status 11 (6.144083331s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-688294" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-688294
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-688294: exit status 11 (6.143114966s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-688294" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.40s)
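
Note: GUEST_STOP_TIMEOUT means minikube polled the VM for the whole stop window while libvirt kept reporting it as "Running"; the follow-up addon enable/disable calls then fail with "no route to host" because SSH to 192.168.39.142 is gone while the guest is wedged. A minimal sketch for inspecting the domain from the CI host, assuming local libvirt access and that the domain name matches the profile name (not confirmed by the log):

	# Domain name assumed to match the minikube profile
	sudo virsh domstate addons-688294
	sudo virsh shutdown addons-688294   # request a clean ACPI shutdown
	sudo virsh destroy addons-688294    # last resort: hard power-off if it stays Running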

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image rm kicbase/echo-server:functional-135358 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls
functional_test.go:402: expected "kicbase/echo-server:functional-135358" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (1.11s)
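
Note: functional_test.go:402 asserts that the tag disappears from "image ls" after "image rm". A hedged way to see the runtime's view directly, assuming crictl is available inside the guest as it normally is on minikube's cri-o images:

	# List images as cri-o sees them; the tag should be absent after a successful rm
	out/minikube-linux-amd64 -p functional-135358 ssh -- sudo crictl images | grep echo-server || echo "tag gone"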

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi kicbase/echo-server:functional-135358
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image save --daemon kicbase/echo-server:functional-135358 --alsologtostderr
functional_test.go:423: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 image save --daemon kicbase/echo-server:functional-135358 --alsologtostderr: exit status 80 (4.592170595s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 23:40:20.169784   22659 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:40:20.170107   22659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:20.170117   22659 out.go:304] Setting ErrFile to fd 2...
	I0721 23:40:20.170121   22659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:20.170305   22659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:40:20.170858   22659 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:40:20.170882   22659 cache_images.go:402] Save images: ["kicbase/echo-server:functional-135358"]
	I0721 23:40:20.170966   22659 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:40:20.171306   22659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:40:20.171341   22659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:40:20.185864   22659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0721 23:40:20.186254   22659 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:40:20.186850   22659 main.go:141] libmachine: Using API Version  1
	I0721 23:40:20.186872   22659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:40:20.187171   22659 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:40:20.187365   22659 main.go:141] libmachine: (functional-135358) Calling .GetState
	I0721 23:40:20.189483   22659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:40:20.189528   22659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:40:20.205325   22659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0721 23:40:20.205782   22659 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:40:20.206325   22659 main.go:141] libmachine: Using API Version  1
	I0721 23:40:20.206353   22659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:40:20.206705   22659 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:40:20.206981   22659 main.go:141] libmachine: (functional-135358) Calling .DriverName
	I0721 23:40:20.207119   22659 cache_images.go:347] SaveCachedImages start: [kicbase/echo-server:functional-135358]
	I0721 23:40:20.207240   22659 ssh_runner.go:195] Run: systemctl --version
	I0721 23:40:20.207268   22659 main.go:141] libmachine: (functional-135358) Calling .GetSSHHostname
	I0721 23:40:20.209762   22659 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined MAC address 52:54:00:e5:90:0c in network mk-functional-135358
	I0721 23:40:20.210142   22659 main.go:141] libmachine: (functional-135358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:90:0c", ip: ""} in network mk-functional-135358: {Iface:virbr1 ExpiryTime:2024-07-22 00:37:23 +0000 UTC Type:0 Mac:52:54:00:e5:90:0c Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-135358 Clientid:01:52:54:00:e5:90:0c}
	I0721 23:40:20.210170   22659 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined IP address 192.168.39.121 and MAC address 52:54:00:e5:90:0c in network mk-functional-135358
	I0721 23:40:20.210275   22659 main.go:141] libmachine: (functional-135358) Calling .GetSSHPort
	I0721 23:40:20.210447   22659 main.go:141] libmachine: (functional-135358) Calling .GetSSHKeyPath
	I0721 23:40:20.210584   22659 main.go:141] libmachine: (functional-135358) Calling .GetSSHUsername
	I0721 23:40:20.210766   22659 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/functional-135358/id_rsa Username:docker}
	I0721 23:40:20.321709   22659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} kicbase/echo-server:functional-135358
	I0721 23:40:24.395546   22659 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} kicbase/echo-server:functional-135358: (4.07379868s)
	I0721 23:40:24.395588   22659 cache_images.go:484] Saving image to: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/kicbase/echo-server_functional-135358
	I0721 23:40:24.395685   22659 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/images/echo-server_functional-135358
	I0721 23:40:24.407960   22659 crio.go:290] Saving image kicbase/echo-server:functional-135358: /var/lib/minikube/images/echo-server_functional-135358
	I0721 23:40:24.408054   22659 ssh_runner.go:195] Run: sudo podman save kicbase/echo-server:functional-135358 -o /var/lib/minikube/images/echo-server_functional-135358
	I0721 23:40:24.677688   22659 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/images/echo-server_functional-135358
	I0721 23:40:24.681898   22659 ssh_runner.go:447] scp /var/lib/minikube/images/echo-server_functional-135358 --> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/kicbase/echo-server_functional-135358 (4950016 bytes)
	I0721 23:40:24.711005   22659 cache_images.go:516] Transferred and saved /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/kicbase/echo-server_functional-135358 to cache
	I0721 23:40:24.711033   22659 cache_images.go:365] Successfully saved all cached images
	I0721 23:40:24.711039   22659 cache_images.go:351] duration metric: took 4.503905247s to SaveCachedImages
	I0721 23:40:24.711047   22659 cache_images.go:456] succeeded pulling from : functional-135358
	I0721 23:40:24.711052   22659 cache_images.go:457] failed pulling from : 
	I0721 23:40:24.711078   22659 main.go:141] libmachine: Making call to close driver server
	I0721 23:40:24.711094   22659 main.go:141] libmachine: (functional-135358) Calling .Close
	I0721 23:40:24.711365   22659 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
	I0721 23:40:24.711413   22659 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:40:24.711425   22659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:40:24.711439   22659 main.go:141] libmachine: Making call to close driver server
	I0721 23:40:24.711452   22659 main.go:141] libmachine: (functional-135358) Calling .Close
	I0721 23:40:24.711688   22659 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
	I0721 23:40:24.711716   22659 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:40:24.711732   22659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:40:24.713888   22659 out.go:177] 
	W0721 23:40:24.715119   22659 out.go:239] X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: tag kicbase/echo-server:functional-135358 not found in tarball
	X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: tag kicbase/echo-server:functional-135358 not found in tarball
	W0721 23:40:24.715132   22659 out.go:239] * 
	* 
	W0721 23:40:24.717011   22659 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 23:40:24.718311   22659 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:425: saving image from minikube to daemon: exit status 80
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.61s)
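
Note: the GUEST_IMAGE_SAVE failure says the expected tag was "not found in tarball": podman saved an archive whose manifest does not carry the kicbase/echo-server:functional-135358 RepoTags entry, so the subsequent load into the docker daemon cannot find it. A sketch for inspecting what the archive actually recorded, assuming podman's default docker-archive format (which stores a top-level manifest.json):

	# Print the RepoTags recorded in the cached archive (path taken from the log above)
	tar -xOf /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/kicbase/echo-server_functional-135358 manifest.json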

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 node stop m02 -v=7 --alsologtostderr
E0721 23:45:15.654717   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:45:36.135257   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:46:17.095861   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.461041458s)

                                                
                                                
-- stdout --
	* Stopping node "ha-564251-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 23:45:11.363370   27224 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:45:11.363512   27224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:45:11.363522   27224 out.go:304] Setting ErrFile to fd 2...
	I0721 23:45:11.363528   27224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:45:11.363725   27224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:45:11.363994   27224 mustload.go:65] Loading cluster: ha-564251
	I0721 23:45:11.364407   27224 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:45:11.364429   27224 stop.go:39] StopHost: ha-564251-m02
	I0721 23:45:11.364778   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:45:11.364819   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:45:11.380590   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0721 23:45:11.381137   27224 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:45:11.381644   27224 main.go:141] libmachine: Using API Version  1
	I0721 23:45:11.381668   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:45:11.381986   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:45:11.384366   27224 out.go:177] * Stopping node "ha-564251-m02"  ...
	I0721 23:45:11.385574   27224 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0721 23:45:11.385607   27224 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:45:11.385809   27224 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0721 23:45:11.385830   27224 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:45:11.388581   27224 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:45:11.389008   27224 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:45:11.389032   27224 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:45:11.389165   27224 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:45:11.389311   27224 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:45:11.389458   27224 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:45:11.389585   27224 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:45:11.478269   27224 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0721 23:45:11.532221   27224 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0721 23:45:11.586806   27224 main.go:141] libmachine: Stopping "ha-564251-m02"...
	I0721 23:45:11.586846   27224 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:45:11.588308   27224 main.go:141] libmachine: (ha-564251-m02) Calling .Stop
	I0721 23:45:11.591822   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 0/120
	I0721 23:45:12.593548   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 1/120
	I0721 23:45:13.595354   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 2/120
	I0721 23:45:14.597080   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 3/120
	I0721 23:45:15.598356   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 4/120
	I0721 23:45:16.600203   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 5/120
	I0721 23:45:17.602272   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 6/120
	I0721 23:45:18.603503   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 7/120
	I0721 23:45:19.604867   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 8/120
	I0721 23:45:20.606150   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 9/120
	I0721 23:45:21.608481   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 10/120
	I0721 23:45:22.609758   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 11/120
	I0721 23:45:23.611634   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 12/120
	I0721 23:45:24.613535   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 13/120
	I0721 23:45:25.615154   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 14/120
	I0721 23:45:26.616907   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 15/120
	I0721 23:45:27.617952   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 16/120
	I0721 23:45:28.619236   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 17/120
	I0721 23:45:29.620928   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 18/120
	I0721 23:45:30.622775   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 19/120
	I0721 23:45:31.624916   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 20/120
	I0721 23:45:32.626481   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 21/120
	I0721 23:45:33.627851   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 22/120
	I0721 23:45:34.630076   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 23/120
	I0721 23:45:35.631545   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 24/120
	I0721 23:45:36.633457   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 25/120
	I0721 23:45:37.635752   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 26/120
	I0721 23:45:38.637353   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 27/120
	I0721 23:45:39.638552   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 28/120
	I0721 23:45:40.640070   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 29/120
	I0721 23:45:41.642079   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 30/120
	I0721 23:45:42.643495   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 31/120
	I0721 23:45:43.645407   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 32/120
	I0721 23:45:44.647144   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 33/120
	I0721 23:45:45.649140   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 34/120
	I0721 23:45:46.651047   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 35/120
	I0721 23:45:47.652960   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 36/120
	I0721 23:45:48.654701   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 37/120
	I0721 23:45:49.656264   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 38/120
	I0721 23:45:50.657563   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 39/120
	I0721 23:45:51.659025   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 40/120
	I0721 23:45:52.660247   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 41/120
	I0721 23:45:53.661597   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 42/120
	I0721 23:45:54.663003   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 43/120
	I0721 23:45:55.665257   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 44/120
	I0721 23:45:56.666575   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 45/120
	I0721 23:45:57.668416   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 46/120
	I0721 23:45:58.669768   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 47/120
	I0721 23:45:59.671123   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 48/120
	I0721 23:46:00.672244   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 49/120
	I0721 23:46:01.674162   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 50/120
	I0721 23:46:02.675433   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 51/120
	I0721 23:46:03.676624   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 52/120
	I0721 23:46:04.678262   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 53/120
	I0721 23:46:05.679452   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 54/120
	I0721 23:46:06.681359   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 55/120
	I0721 23:46:07.682658   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 56/120
	I0721 23:46:08.683818   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 57/120
	I0721 23:46:09.685142   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 58/120
	I0721 23:46:10.687271   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 59/120
	I0721 23:46:11.689305   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 60/120
	I0721 23:46:12.690571   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 61/120
	I0721 23:46:13.691854   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 62/120
	I0721 23:46:14.693429   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 63/120
	I0721 23:46:15.694845   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 64/120
	I0721 23:46:16.696323   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 65/120
	I0721 23:46:17.697704   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 66/120
	I0721 23:46:18.698982   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 67/120
	I0721 23:46:19.701167   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 68/120
	I0721 23:46:20.702843   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 69/120
	I0721 23:46:21.704196   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 70/120
	I0721 23:46:22.705494   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 71/120
	I0721 23:46:23.707043   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 72/120
	I0721 23:46:24.709277   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 73/120
	I0721 23:46:25.710666   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 74/120
	I0721 23:46:26.712772   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 75/120
	I0721 23:46:27.714071   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 76/120
	I0721 23:46:28.715436   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 77/120
	I0721 23:46:29.717130   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 78/120
	I0721 23:46:30.718491   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 79/120
	I0721 23:46:31.720870   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 80/120
	I0721 23:46:32.722353   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 81/120
	I0721 23:46:33.723797   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 82/120
	I0721 23:46:34.725651   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 83/120
	I0721 23:46:35.727048   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 84/120
	I0721 23:46:36.729014   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 85/120
	I0721 23:46:37.730430   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 86/120
	I0721 23:46:38.731703   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 87/120
	I0721 23:46:39.732944   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 88/120
	I0721 23:46:40.734177   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 89/120
	I0721 23:46:41.736196   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 90/120
	I0721 23:46:42.738022   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 91/120
	I0721 23:46:43.739384   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 92/120
	I0721 23:46:44.740697   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 93/120
	I0721 23:46:45.743052   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 94/120
	I0721 23:46:46.745040   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 95/120
	I0721 23:46:47.746498   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 96/120
	I0721 23:46:48.747896   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 97/120
	I0721 23:46:49.749224   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 98/120
	I0721 23:46:50.750814   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 99/120
	I0721 23:46:51.752805   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 100/120
	I0721 23:46:52.754641   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 101/120
	I0721 23:46:53.755987   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 102/120
	I0721 23:46:54.757588   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 103/120
	I0721 23:46:55.759057   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 104/120
	I0721 23:46:56.760944   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 105/120
	I0721 23:46:57.762332   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 106/120
	I0721 23:46:58.764599   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 107/120
	I0721 23:46:59.765895   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 108/120
	I0721 23:47:00.767114   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 109/120
	I0721 23:47:01.768694   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 110/120
	I0721 23:47:02.770205   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 111/120
	I0721 23:47:03.771513   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 112/120
	I0721 23:47:04.772819   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 113/120
	I0721 23:47:05.774365   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 114/120
	I0721 23:47:06.776271   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 115/120
	I0721 23:47:07.777493   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 116/120
	I0721 23:47:08.778829   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 117/120
	I0721 23:47:09.781114   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 118/120
	I0721 23:47:10.783364   27224 main.go:141] libmachine: (ha-564251-m02) Waiting for machine to stop 119/120
	I0721 23:47:11.784783   27224 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0721 23:47:11.785013   27224 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-564251 node stop m02 -v=7 --alsologtostderr": exit status 30
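As in TestAddons/StoppedEnableDisable above, the stop burned through all 120 one-second polls ("Waiting for machine to stop 0/120" through "119/120") without the domain ever leaving "Running", so the harness gave up with exit status 30, and the status call that follows reports m02's host as Error. A hedged domain-level check, again assuming local libvirt and that the domain carries the node name (not confirmed by the log):

	# Domain name assumed to equal the node name
	sudo virsh domstate ha-564251-m02
	sudo virsh destroy ha-564251-m02   # hard power-off if it is still running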
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 3 (19.156496787s)

                                                
                                                
-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 23:47:11.828994   27639 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:47:11.829118   27639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:11.829126   27639 out.go:304] Setting ErrFile to fd 2...
	I0721 23:47:11.829130   27639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:11.829342   27639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:47:11.829552   27639 out.go:298] Setting JSON to false
	I0721 23:47:11.829587   27639 mustload.go:65] Loading cluster: ha-564251
	I0721 23:47:11.829698   27639 notify.go:220] Checking for updates...
	I0721 23:47:11.829971   27639 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:47:11.829987   27639 status.go:255] checking status of ha-564251 ...
	I0721 23:47:11.830440   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:11.830505   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:11.848129   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0721 23:47:11.848596   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:11.849283   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:11.849321   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:11.849711   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:11.849900   27639 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:47:11.851448   27639 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:47:11.851464   27639 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:11.851724   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:11.851758   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:11.865844   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0721 23:47:11.866209   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:11.866663   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:11.866689   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:11.867015   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:11.867190   27639 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:47:11.869787   27639 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:11.870224   27639 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:11.870261   27639 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:11.870355   27639 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:11.870660   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:11.870694   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:11.885574   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I0721 23:47:11.885908   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:11.886346   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:11.886369   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:11.886649   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:11.886823   27639 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:47:11.887044   27639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:11.887091   27639 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:47:11.889794   27639 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:11.890153   27639 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:11.890179   27639 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:11.890277   27639 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:47:11.890437   27639 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:47:11.890577   27639 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:47:11.890752   27639 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:47:11.971152   27639 ssh_runner.go:195] Run: systemctl --version
	I0721 23:47:11.977780   27639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:11.993412   27639 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:11.993439   27639 api_server.go:166] Checking apiserver status ...
	I0721 23:47:11.993473   27639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:12.009281   27639 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:47:12.018035   27639 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:12.018084   27639 ssh_runner.go:195] Run: ls
	I0721 23:47:12.024578   27639 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:12.030434   27639 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:12.030455   27639 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:47:12.030464   27639 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:12.030479   27639 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:47:12.030794   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:12.030833   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:12.046019   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0721 23:47:12.046438   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:12.046930   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:12.046951   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:12.047291   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:12.047463   27639 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:47:12.049043   27639 status.go:330] ha-564251-m02 host status = "Running" (err=<nil>)
	I0721 23:47:12.049060   27639 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:12.049329   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:12.049364   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:12.065666   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0721 23:47:12.066078   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:12.066562   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:12.066588   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:12.066877   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:12.067110   27639 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:47:12.070276   27639 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:12.070718   27639 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:12.070746   27639 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:12.070882   27639 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:12.071300   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:12.071349   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:12.086147   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0721 23:47:12.086529   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:12.086969   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:12.086987   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:12.087306   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:12.087465   27639 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:47:12.087657   27639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:12.087677   27639 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:47:12.090054   27639 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:12.090412   27639 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:12.090437   27639 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:12.090627   27639 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:47:12.090788   27639 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:47:12.090940   27639 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:47:12.091073   27639 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	W0721 23:47:30.590805   27639 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:47:30.590895   27639 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0721 23:47:30.590909   27639 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:30.590918   27639 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 23:47:30.590937   27639 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:30.590944   27639 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:47:30.591252   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:30.591306   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:30.607080   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I0721 23:47:30.607534   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:30.608114   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:30.608139   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:30.608457   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:30.608676   27639 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:47:30.610292   27639 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:47:30.610315   27639 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:30.610629   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:30.610682   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:30.625839   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0721 23:47:30.626264   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:30.626748   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:30.626771   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:30.627107   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:30.627308   27639 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:47:30.630036   27639 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:30.630431   27639 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:30.630458   27639 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:30.630574   27639 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:30.630965   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:30.631005   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:30.645944   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I0721 23:47:30.646283   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:30.646855   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:30.646875   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:30.647211   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:30.647399   27639 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:47:30.647607   27639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:30.647631   27639 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:47:30.649968   27639 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:30.650389   27639 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:30.650405   27639 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:30.650590   27639 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:47:30.650750   27639 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:47:30.650868   27639 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:47:30.651028   27639 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:47:30.735064   27639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:30.751642   27639 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:30.751676   27639 api_server.go:166] Checking apiserver status ...
	I0721 23:47:30.751717   27639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:30.767011   27639 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:47:30.776193   27639 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:30.776248   27639 ssh_runner.go:195] Run: ls
	I0721 23:47:30.780884   27639 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:30.786870   27639 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:30.786891   27639 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:47:30.786898   27639 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:30.786911   27639 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:47:30.787222   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:30.787263   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:30.802136   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I0721 23:47:30.802476   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:30.802935   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:30.802957   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:30.803228   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:30.803413   27639 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:47:30.804965   27639 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:47:30.804982   27639 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:30.805253   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:30.805289   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:30.820920   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46815
	I0721 23:47:30.821280   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:30.821726   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:30.821747   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:30.822055   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:30.822261   27639 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:47:30.824899   27639 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:30.825303   27639 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:30.825336   27639 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:30.825474   27639 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:30.825785   27639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:30.825843   27639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:30.839949   27639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0721 23:47:30.840280   27639 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:30.840701   27639 main.go:141] libmachine: Using API Version  1
	I0721 23:47:30.840721   27639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:30.841021   27639 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:30.841215   27639 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:47:30.841419   27639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:30.841456   27639 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:47:30.844420   27639 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:30.844765   27639 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:30.844800   27639 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:30.844890   27639 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:47:30.845071   27639 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:47:30.845214   27639 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:47:30.845359   27639 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:47:30.923303   27639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:30.940731   27639 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr" : exit status 3
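Note: the exit status 3 above comes from the stopped secondary control plane. ha-564251-m02 reports Host:Error with kubelet/apiserver Nonexistent because SSH to 192.168.39.202:22 fails with "no route to host" (see the stderr log above), which is the expected state after the preceding "node stop m02" step in the audit log below; the test nevertheless treats the non-zero exit as a failure. A minimal reproduction sketch, assuming the same profile and node names as this run:

	# Stop the secondary control-plane node, then query cluster status.
	out/minikube-linux-amd64 -p ha-564251 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
	echo "status exit code: $?"   # 3 in this run: one host is in state Error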
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-564251 -n ha-564251
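For reference, the --format flag in the post-mortem command above takes a Go template over minikube's status struct, so a single field can be read directly (profile and node flags copied from this run):

	# Print only the Host field for the primary control-plane node.
	out/minikube-linux-amd64 status --format='{{.Host}}' -p ha-564251 -n ha-564251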
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-564251 logs -n 25: (1.358294702s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251:/home/docker/cp-test_ha-564251-m03_ha-564251.txt                       |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251 sudo cat                                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251.txt                                 |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m02:/home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m02 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04:/home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m04 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp testdata/cp-test.txt                                                | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251:/home/docker/cp-test_ha-564251-m04_ha-564251.txt                       |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251 sudo cat                                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251.txt                                 |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m02:/home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m02 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03:/home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m03 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-564251 node stop m02 -v=7                                                     | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:40:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:40:40.546278   23196 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:40:40.546413   23196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:40.546425   23196 out.go:304] Setting ErrFile to fd 2...
	I0721 23:40:40.546431   23196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:40.546636   23196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:40:40.547182   23196 out.go:298] Setting JSON to false
	I0721 23:40:40.548067   23196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1385,"bootTime":1721603856,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:40:40.548125   23196 start.go:139] virtualization: kvm guest
	I0721 23:40:40.550458   23196 out.go:177] * [ha-564251] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:40:40.551991   23196 notify.go:220] Checking for updates...
	I0721 23:40:40.552011   23196 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:40:40.553311   23196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:40:40.554713   23196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:40:40.556029   23196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:40:40.557257   23196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0721 23:40:40.558476   23196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:40:40.559903   23196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:40:40.593913   23196 out.go:177] * Using the kvm2 driver based on user configuration
	I0721 23:40:40.595060   23196 start.go:297] selected driver: kvm2
	I0721 23:40:40.595084   23196 start.go:901] validating driver "kvm2" against <nil>
	I0721 23:40:40.595095   23196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:40:40.595784   23196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:40:40.595846   23196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:40:40.610241   23196 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:40:40.610301   23196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:40:40.610514   23196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:40:40.610541   23196 cni.go:84] Creating CNI manager for ""
	I0721 23:40:40.610547   23196 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0721 23:40:40.610559   23196 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0721 23:40:40.610663   23196 start.go:340] cluster config:
	{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:40:40.610753   23196 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:40:40.612886   23196 out.go:177] * Starting "ha-564251" primary control-plane node in "ha-564251" cluster
	I0721 23:40:40.613918   23196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:40:40.613953   23196 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0721 23:40:40.613962   23196 cache.go:56] Caching tarball of preloaded images
	I0721 23:40:40.614031   23196 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:40:40.614045   23196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:40:40.614355   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:40:40.614381   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json: {Name:mk5a28a63630db995c66c5ccfa02b795741655f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:40:40.614514   23196 start.go:360] acquireMachinesLock for ha-564251: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:40:40.614567   23196 start.go:364] duration metric: took 28.82µs to acquireMachinesLock for "ha-564251"
	I0721 23:40:40.614590   23196 start.go:93] Provisioning new machine with config: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:40:40.614689   23196 start.go:125] createHost starting for "" (driver="kvm2")
	I0721 23:40:40.616125   23196 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 23:40:40.616273   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:40:40.616314   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:40:40.629715   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0721 23:40:40.630093   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:40:40.630676   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:40:40.630696   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:40:40.631015   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:40:40.631203   23196 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:40:40.631366   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:40:40.631515   23196 start.go:159] libmachine.API.Create for "ha-564251" (driver="kvm2")
	I0721 23:40:40.631542   23196 client.go:168] LocalClient.Create starting
	I0721 23:40:40.631579   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0721 23:40:40.631619   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:40:40.631637   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:40:40.631704   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0721 23:40:40.631727   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:40:40.631746   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:40:40.631776   23196 main.go:141] libmachine: Running pre-create checks...
	I0721 23:40:40.631787   23196 main.go:141] libmachine: (ha-564251) Calling .PreCreateCheck
	I0721 23:40:40.632105   23196 main.go:141] libmachine: (ha-564251) Calling .GetConfigRaw
	I0721 23:40:40.632476   23196 main.go:141] libmachine: Creating machine...
	I0721 23:40:40.632491   23196 main.go:141] libmachine: (ha-564251) Calling .Create
	I0721 23:40:40.632600   23196 main.go:141] libmachine: (ha-564251) Creating KVM machine...
	I0721 23:40:40.633705   23196 main.go:141] libmachine: (ha-564251) DBG | found existing default KVM network
	I0721 23:40:40.634328   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:40.634206   23219 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0721 23:40:40.634379   23196 main.go:141] libmachine: (ha-564251) DBG | created network xml: 
	I0721 23:40:40.634396   23196 main.go:141] libmachine: (ha-564251) DBG | <network>
	I0721 23:40:40.634411   23196 main.go:141] libmachine: (ha-564251) DBG |   <name>mk-ha-564251</name>
	I0721 23:40:40.634417   23196 main.go:141] libmachine: (ha-564251) DBG |   <dns enable='no'/>
	I0721 23:40:40.634424   23196 main.go:141] libmachine: (ha-564251) DBG |   
	I0721 23:40:40.634431   23196 main.go:141] libmachine: (ha-564251) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0721 23:40:40.634442   23196 main.go:141] libmachine: (ha-564251) DBG |     <dhcp>
	I0721 23:40:40.634452   23196 main.go:141] libmachine: (ha-564251) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0721 23:40:40.634461   23196 main.go:141] libmachine: (ha-564251) DBG |     </dhcp>
	I0721 23:40:40.634474   23196 main.go:141] libmachine: (ha-564251) DBG |   </ip>
	I0721 23:40:40.634484   23196 main.go:141] libmachine: (ha-564251) DBG |   
	I0721 23:40:40.634495   23196 main.go:141] libmachine: (ha-564251) DBG | </network>
	I0721 23:40:40.634515   23196 main.go:141] libmachine: (ha-564251) DBG | 
	I0721 23:40:40.639387   23196 main.go:141] libmachine: (ha-564251) DBG | trying to create private KVM network mk-ha-564251 192.168.39.0/24...
	I0721 23:40:40.701034   23196 main.go:141] libmachine: (ha-564251) DBG | private KVM network mk-ha-564251 192.168.39.0/24 created
	I0721 23:40:40.701077   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:40.700975   23219 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:40:40.701091   23196 main.go:141] libmachine: (ha-564251) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251 ...
	I0721 23:40:40.701111   23196 main.go:141] libmachine: (ha-564251) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:40:40.701138   23196 main.go:141] libmachine: (ha-564251) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:40:40.947585   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:40.947443   23219 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa...
	I0721 23:40:41.145755   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:41.145633   23219 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/ha-564251.rawdisk...
	I0721 23:40:41.145788   23196 main.go:141] libmachine: (ha-564251) DBG | Writing magic tar header
	I0721 23:40:41.145800   23196 main.go:141] libmachine: (ha-564251) DBG | Writing SSH key tar header
	I0721 23:40:41.145807   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:41.145755   23219 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251 ...
	I0721 23:40:41.145887   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251
	I0721 23:40:41.145903   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0721 23:40:41.145915   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251 (perms=drwx------)
	I0721 23:40:41.145943   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0721 23:40:41.145951   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:40:41.145957   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0721 23:40:41.145974   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0721 23:40:41.145990   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0721 23:40:41.146003   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0721 23:40:41.146017   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0721 23:40:41.146025   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins
	I0721 23:40:41.146031   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0721 23:40:41.146039   23196 main.go:141] libmachine: (ha-564251) Creating domain...
	I0721 23:40:41.146050   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home
	I0721 23:40:41.146058   23196 main.go:141] libmachine: (ha-564251) DBG | Skipping /home - not owner
	I0721 23:40:41.147119   23196 main.go:141] libmachine: (ha-564251) define libvirt domain using xml: 
	I0721 23:40:41.147144   23196 main.go:141] libmachine: (ha-564251) <domain type='kvm'>
	I0721 23:40:41.147155   23196 main.go:141] libmachine: (ha-564251)   <name>ha-564251</name>
	I0721 23:40:41.147172   23196 main.go:141] libmachine: (ha-564251)   <memory unit='MiB'>2200</memory>
	I0721 23:40:41.147184   23196 main.go:141] libmachine: (ha-564251)   <vcpu>2</vcpu>
	I0721 23:40:41.147192   23196 main.go:141] libmachine: (ha-564251)   <features>
	I0721 23:40:41.147202   23196 main.go:141] libmachine: (ha-564251)     <acpi/>
	I0721 23:40:41.147213   23196 main.go:141] libmachine: (ha-564251)     <apic/>
	I0721 23:40:41.147225   23196 main.go:141] libmachine: (ha-564251)     <pae/>
	I0721 23:40:41.147236   23196 main.go:141] libmachine: (ha-564251)     
	I0721 23:40:41.147248   23196 main.go:141] libmachine: (ha-564251)   </features>
	I0721 23:40:41.147263   23196 main.go:141] libmachine: (ha-564251)   <cpu mode='host-passthrough'>
	I0721 23:40:41.147274   23196 main.go:141] libmachine: (ha-564251)   
	I0721 23:40:41.147281   23196 main.go:141] libmachine: (ha-564251)   </cpu>
	I0721 23:40:41.147293   23196 main.go:141] libmachine: (ha-564251)   <os>
	I0721 23:40:41.147303   23196 main.go:141] libmachine: (ha-564251)     <type>hvm</type>
	I0721 23:40:41.147315   23196 main.go:141] libmachine: (ha-564251)     <boot dev='cdrom'/>
	I0721 23:40:41.147343   23196 main.go:141] libmachine: (ha-564251)     <boot dev='hd'/>
	I0721 23:40:41.147354   23196 main.go:141] libmachine: (ha-564251)     <bootmenu enable='no'/>
	I0721 23:40:41.147363   23196 main.go:141] libmachine: (ha-564251)   </os>
	I0721 23:40:41.147370   23196 main.go:141] libmachine: (ha-564251)   <devices>
	I0721 23:40:41.147380   23196 main.go:141] libmachine: (ha-564251)     <disk type='file' device='cdrom'>
	I0721 23:40:41.147395   23196 main.go:141] libmachine: (ha-564251)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/boot2docker.iso'/>
	I0721 23:40:41.147408   23196 main.go:141] libmachine: (ha-564251)       <target dev='hdc' bus='scsi'/>
	I0721 23:40:41.147418   23196 main.go:141] libmachine: (ha-564251)       <readonly/>
	I0721 23:40:41.147429   23196 main.go:141] libmachine: (ha-564251)     </disk>
	I0721 23:40:41.147441   23196 main.go:141] libmachine: (ha-564251)     <disk type='file' device='disk'>
	I0721 23:40:41.147453   23196 main.go:141] libmachine: (ha-564251)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0721 23:40:41.147469   23196 main.go:141] libmachine: (ha-564251)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/ha-564251.rawdisk'/>
	I0721 23:40:41.147480   23196 main.go:141] libmachine: (ha-564251)       <target dev='hda' bus='virtio'/>
	I0721 23:40:41.147491   23196 main.go:141] libmachine: (ha-564251)     </disk>
	I0721 23:40:41.147505   23196 main.go:141] libmachine: (ha-564251)     <interface type='network'>
	I0721 23:40:41.147525   23196 main.go:141] libmachine: (ha-564251)       <source network='mk-ha-564251'/>
	I0721 23:40:41.147536   23196 main.go:141] libmachine: (ha-564251)       <model type='virtio'/>
	I0721 23:40:41.147546   23196 main.go:141] libmachine: (ha-564251)     </interface>
	I0721 23:40:41.147568   23196 main.go:141] libmachine: (ha-564251)     <interface type='network'>
	I0721 23:40:41.147581   23196 main.go:141] libmachine: (ha-564251)       <source network='default'/>
	I0721 23:40:41.147588   23196 main.go:141] libmachine: (ha-564251)       <model type='virtio'/>
	I0721 23:40:41.147606   23196 main.go:141] libmachine: (ha-564251)     </interface>
	I0721 23:40:41.147616   23196 main.go:141] libmachine: (ha-564251)     <serial type='pty'>
	I0721 23:40:41.147645   23196 main.go:141] libmachine: (ha-564251)       <target port='0'/>
	I0721 23:40:41.147663   23196 main.go:141] libmachine: (ha-564251)     </serial>
	I0721 23:40:41.147669   23196 main.go:141] libmachine: (ha-564251)     <console type='pty'>
	I0721 23:40:41.147677   23196 main.go:141] libmachine: (ha-564251)       <target type='serial' port='0'/>
	I0721 23:40:41.147690   23196 main.go:141] libmachine: (ha-564251)     </console>
	I0721 23:40:41.147698   23196 main.go:141] libmachine: (ha-564251)     <rng model='virtio'>
	I0721 23:40:41.147703   23196 main.go:141] libmachine: (ha-564251)       <backend model='random'>/dev/random</backend>
	I0721 23:40:41.147710   23196 main.go:141] libmachine: (ha-564251)     </rng>
	I0721 23:40:41.147714   23196 main.go:141] libmachine: (ha-564251)     
	I0721 23:40:41.147721   23196 main.go:141] libmachine: (ha-564251)     
	I0721 23:40:41.147730   23196 main.go:141] libmachine: (ha-564251)   </devices>
	I0721 23:40:41.147737   23196 main.go:141] libmachine: (ha-564251) </domain>
	I0721 23:40:41.147741   23196 main.go:141] libmachine: (ha-564251) 
	I0721 23:40:41.152060   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:1b:3f:cc in network default
	I0721 23:40:41.152594   23196 main.go:141] libmachine: (ha-564251) Ensuring networks are active...
	I0721 23:40:41.152616   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:41.153170   23196 main.go:141] libmachine: (ha-564251) Ensuring network default is active
	I0721 23:40:41.153584   23196 main.go:141] libmachine: (ha-564251) Ensuring network mk-ha-564251 is active
	I0721 23:40:41.154236   23196 main.go:141] libmachine: (ha-564251) Getting domain xml...
	I0721 23:40:41.154951   23196 main.go:141] libmachine: (ha-564251) Creating domain...
	I0721 23:40:42.321898   23196 main.go:141] libmachine: (ha-564251) Waiting to get IP...
	I0721 23:40:42.322641   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:42.323001   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:42.323045   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:42.322986   23219 retry.go:31] will retry after 226.990581ms: waiting for machine to come up
	I0721 23:40:42.551449   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:42.551889   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:42.551917   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:42.551843   23219 retry.go:31] will retry after 345.157454ms: waiting for machine to come up
	I0721 23:40:42.898184   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:42.898667   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:42.898716   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:42.898637   23219 retry.go:31] will retry after 450.376972ms: waiting for machine to come up
	I0721 23:40:43.350132   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:43.350532   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:43.350567   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:43.350476   23219 retry.go:31] will retry after 548.229138ms: waiting for machine to come up
	I0721 23:40:43.900112   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:43.900526   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:43.900558   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:43.900490   23219 retry.go:31] will retry after 742.775493ms: waiting for machine to come up
	I0721 23:40:44.645071   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:44.645486   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:44.645513   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:44.645434   23219 retry.go:31] will retry after 784.586324ms: waiting for machine to come up
	I0721 23:40:45.431400   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:45.431765   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:45.431801   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:45.431727   23219 retry.go:31] will retry after 1.075109633s: waiting for machine to come up
	I0721 23:40:46.508612   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:46.509010   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:46.509035   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:46.508968   23219 retry.go:31] will retry after 1.2901904s: waiting for machine to come up
	I0721 23:40:47.801398   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:47.801883   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:47.801911   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:47.801825   23219 retry.go:31] will retry after 1.682137152s: waiting for machine to come up
	I0721 23:40:49.486662   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:49.487036   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:49.487066   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:49.486988   23219 retry.go:31] will retry after 1.799508967s: waiting for machine to come up
	I0721 23:40:51.287656   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:51.288059   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:51.288085   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:51.288008   23219 retry.go:31] will retry after 2.604882291s: waiting for machine to come up
	I0721 23:40:53.895574   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:53.895902   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:53.895921   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:53.895875   23219 retry.go:31] will retry after 2.265187217s: waiting for machine to come up
	I0721 23:40:56.162821   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:56.163266   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:56.163291   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:56.163221   23219 retry.go:31] will retry after 3.572604507s: waiting for machine to come up
	I0721 23:40:59.739716   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.740066   23196 main.go:141] libmachine: (ha-564251) Found IP for machine: 192.168.39.91
	I0721 23:40:59.740097   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has current primary IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.740106   23196 main.go:141] libmachine: (ha-564251) Reserving static IP address...
	I0721 23:40:59.740418   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find host DHCP lease matching {name: "ha-564251", mac: "52:54:00:92:9e:c7", ip: "192.168.39.91"} in network mk-ha-564251
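The repeated "will retry after ..." lines above are minikube's retry.go backing off while the libvirt DHCP lease appears; note the interval grows from ~227ms to several seconds. A minimal sketch of that kind of growing, jittered backoff loop (the function name, growth factor, and timeout here are assumptions):

package main

import (
	"errors"
	"math/rand"
	"time"
)

// waitForIP polls a lookup function until it yields an address,
// sleeping a little longer (plus jitter) after each miss, as the
// increasing retry intervals in the log suggest.
func waitForIP(lookup func() (string, error)) (string, error) {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		delay = delay * 3 / 2 // grow the base delay after each miss
	}
	return "", errors.New("timed out waiting for machine IP")
}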
	I0721 23:40:59.809966   23196 main.go:141] libmachine: (ha-564251) DBG | Getting to WaitForSSH function...
	I0721 23:40:59.809998   23196 main.go:141] libmachine: (ha-564251) Reserved static IP address: 192.168.39.91
	I0721 23:40:59.810012   23196 main.go:141] libmachine: (ha-564251) Waiting for SSH to be available...
	I0721 23:40:59.812265   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.812627   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:9e:c7}
	I0721 23:40:59.812652   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.812790   23196 main.go:141] libmachine: (ha-564251) DBG | Using SSH client type: external
	I0721 23:40:59.812811   23196 main.go:141] libmachine: (ha-564251) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa (-rw-------)
	I0721 23:40:59.812828   23196 main.go:141] libmachine: (ha-564251) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0721 23:40:59.812833   23196 main.go:141] libmachine: (ha-564251) DBG | About to run SSH command:
	I0721 23:40:59.812841   23196 main.go:141] libmachine: (ha-564251) DBG | exit 0
	I0721 23:40:59.930266   23196 main.go:141] libmachine: (ha-564251) DBG | SSH cmd err, output: <nil>: 
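WaitForSSH above shells out to the external ssh client and treats a clean `exit 0` as readiness. A sketch of that probe (the options are copied from the log line above; the helper name is an assumption):

package main

import "os/exec"

// sshReady returns true once the guest accepts an SSH session and can
// run a trivial command, which is exactly what the "exit 0" probe does.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit", "0")
	return cmd.Run() == nil
}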
	I0721 23:40:59.930538   23196 main.go:141] libmachine: (ha-564251) KVM machine creation complete!
	I0721 23:40:59.930835   23196 main.go:141] libmachine: (ha-564251) Calling .GetConfigRaw
	I0721 23:40:59.931422   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:40:59.931615   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:40:59.931782   23196 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0721 23:40:59.931820   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:40:59.933150   23196 main.go:141] libmachine: Detecting operating system of created instance...
	I0721 23:40:59.933163   23196 main.go:141] libmachine: Waiting for SSH to be available...
	I0721 23:40:59.933168   23196 main.go:141] libmachine: Getting to WaitForSSH function...
	I0721 23:40:59.933174   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:40:59.935350   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.935655   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:40:59.935689   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.935824   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:40:59.935986   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:40:59.936138   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:40:59.936267   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:40:59.936438   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:40:59.936715   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:40:59.936735   23196 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0721 23:41:00.033692   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:41:00.033716   23196 main.go:141] libmachine: Detecting the provisioner...
	I0721 23:41:00.033726   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.036753   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.037113   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.037131   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.037281   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.037582   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.037816   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.037975   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.038123   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.038281   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.038291   23196 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0721 23:41:00.134971   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0721 23:41:00.135071   23196 main.go:141] libmachine: found compatible host: buildroot
	I0721 23:41:00.135109   23196 main.go:141] libmachine: Provisioning with buildroot...
	I0721 23:41:00.135123   23196 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:41:00.135381   23196 buildroot.go:166] provisioning hostname "ha-564251"
	I0721 23:41:00.135410   23196 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:41:00.135584   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.137805   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.138153   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.138178   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.138331   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.138496   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.138671   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.138815   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.138980   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.139142   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.139152   23196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-564251 && echo "ha-564251" | sudo tee /etc/hostname
	I0721 23:41:00.247562   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251
	
	I0721 23:41:00.247593   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.250032   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.250427   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.250456   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.250699   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.250867   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.251037   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.251221   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.251397   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.251588   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.251604   23196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-564251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-564251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-564251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:41:00.354410   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
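The shell above keeps /etc/hosts consistent with the new hostname: rewrite an existing `127.0.1.1` line if there is one, otherwise append one. The same logic restated as a host-side Go sketch (the real code runs the shell over SSH; the function name is an assumption):

package main

import (
	"os"
	"regexp"
)

// ensureLoopbackHostname maps 127.0.1.1 to the machine hostname,
// replacing an existing entry or appending a new one.
func ensureLoopbackHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(entry))
	} else {
		data = append(data, []byte("\n"+entry+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}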
	I0721 23:41:00.354435   23196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:41:00.354462   23196 buildroot.go:174] setting up certificates
	I0721 23:41:00.354472   23196 provision.go:84] configureAuth start
	I0721 23:41:00.354480   23196 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:41:00.354804   23196 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:41:00.357273   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.357634   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.357661   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.357806   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.359631   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.359886   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.359913   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.359992   23196 provision.go:143] copyHostCerts
	I0721 23:41:00.360055   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:41:00.360099   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0721 23:41:00.360116   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:41:00.360196   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:41:00.360292   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:41:00.360316   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0721 23:41:00.360324   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:41:00.360360   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:41:00.360460   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:41:00.360489   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0721 23:41:00.360498   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:41:00.360530   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:41:00.360593   23196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.ha-564251 san=[127.0.0.1 192.168.39.91 ha-564251 localhost minikube]
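The server cert above is issued against the local CA with both IP and DNS SANs (127.0.0.1, the machine IP, the hostname, localhost, minikube). A stdlib sketch of issuing such a cert (PEM encoding and CA loading are omitted; the validity period and key size are assumptions, not minikube's exact values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA,
// embedding the IP and DNS SANs seen in the log line above.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-564251"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.91")},
		DNSNames:     []string{"ha-564251", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}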
	I0721 23:41:00.448962   23196 provision.go:177] copyRemoteCerts
	I0721 23:41:00.449011   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:41:00.449031   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.451527   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.451855   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.451890   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.452006   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.452202   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.452366   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.452506   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:00.528321   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0721 23:41:00.528414   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 23:41:00.551499   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0721 23:41:00.551569   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:41:00.573075   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0721 23:41:00.573127   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0721 23:41:00.594064   23196 provision.go:87] duration metric: took 239.579894ms to configureAuth
	I0721 23:41:00.594094   23196 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:41:00.594255   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:00.594334   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.596669   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.596983   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.597008   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.597156   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.597365   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.597515   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.597690   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.597863   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.598012   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.598028   23196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:41:00.851630   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:41:00.851659   23196 main.go:141] libmachine: Checking connection to Docker...
	I0721 23:41:00.851667   23196 main.go:141] libmachine: (ha-564251) Calling .GetURL
	I0721 23:41:00.852807   23196 main.go:141] libmachine: (ha-564251) DBG | Using libvirt version 6000000
	I0721 23:41:00.854810   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.855075   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.855099   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.855246   23196 main.go:141] libmachine: Docker is up and running!
	I0721 23:41:00.855259   23196 main.go:141] libmachine: Reticulating splines...
	I0721 23:41:00.855268   23196 client.go:171] duration metric: took 20.223716322s to LocalClient.Create
	I0721 23:41:00.855293   23196 start.go:167] duration metric: took 20.223778038s to libmachine.API.Create "ha-564251"
	I0721 23:41:00.855305   23196 start.go:293] postStartSetup for "ha-564251" (driver="kvm2")
	I0721 23:41:00.855318   23196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:41:00.855339   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:00.855542   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:41:00.855563   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.857342   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.857731   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.857749   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.857896   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.858145   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.858289   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.858455   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:00.936595   23196 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:41:00.940663   23196 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:41:00.940681   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:41:00.940740   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:41:00.940808   23196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0721 23:41:00.940817   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0721 23:41:00.940906   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 23:41:00.950096   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:41:00.976055   23196 start.go:296] duration metric: took 120.738688ms for postStartSetup
	I0721 23:41:00.976098   23196 main.go:141] libmachine: (ha-564251) Calling .GetConfigRaw
	I0721 23:41:00.976700   23196 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:41:00.979268   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.979603   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.979618   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.979846   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:41:00.980000   23196 start.go:128] duration metric: took 20.365301805s to createHost
	I0721 23:41:00.980018   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.982201   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.982498   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.982541   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.982655   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.982885   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.983071   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.983240   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.983473   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.983649   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.983662   23196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:41:01.078829   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605261.053137130
	
	I0721 23:41:01.078846   23196 fix.go:216] guest clock: 1721605261.053137130
	I0721 23:41:01.078862   23196 fix.go:229] Guest: 2024-07-21 23:41:01.05313713 +0000 UTC Remote: 2024-07-21 23:41:00.980009736 +0000 UTC m=+20.466637872 (delta=73.127394ms)
	I0721 23:41:01.078890   23196 fix.go:200] guest clock delta is within tolerance: 73.127394ms
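fix.go compares the guest's `date +%s.%N` output against the host clock and moves on when the delta is within a tolerance; here it is ~73ms, well within bounds. A sketch of that check (the tolerance value is an assumption):

package main

import (
	"fmt"
	"time"
)

// checkClockDelta errors out when guest and host clocks diverge by
// more than the allowed tolerance, as the fix.go lines above imply.
func checkClockDelta(guest, host time.Time) error {
	const tolerance = 2 * time.Second
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
	}
	return nil
}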
	I0721 23:41:01.078895   23196 start.go:83] releasing machines lock for "ha-564251", held for 20.46431804s
	I0721 23:41:01.078911   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:01.079173   23196 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:41:01.081997   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.082367   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:01.082391   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.082540   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:01.083066   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:01.083240   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:01.083343   23196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:41:01.083392   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:01.083457   23196 ssh_runner.go:195] Run: cat /version.json
	I0721 23:41:01.083482   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:01.085717   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.086033   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:01.086070   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.086089   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.086205   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:01.086378   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:01.086496   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:01.086521   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.086632   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:01.086689   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:01.086821   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:01.086819   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:01.086954   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:01.087174   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:01.187788   23196 ssh_runner.go:195] Run: systemctl --version
	I0721 23:41:01.193357   23196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:41:01.345767   23196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:41:01.351542   23196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:41:01.351601   23196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:41:01.365775   23196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 23:41:01.365792   23196 start.go:495] detecting cgroup driver to use...
	I0721 23:41:01.365842   23196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:41:01.380850   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:41:01.393445   23196 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:41:01.393503   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:41:01.405644   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:41:01.418583   23196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:41:01.526640   23196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:41:01.658590   23196 docker.go:233] disabling docker service ...
	I0721 23:41:01.658658   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:41:01.679251   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:41:01.691467   23196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:41:01.824984   23196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:41:01.953360   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0721 23:41:01.966263   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:41:01.982934   23196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:41:01.983004   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:01.992477   23196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:41:01.992553   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.002358   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.011880   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.021371   23196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:41:02.031204   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.041031   23196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.056975   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
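Each sed one-liner above rewrites a single key in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). The common pattern, restated as a Go sketch for local files (the helper name and file permissions are assumptions; the real code runs sed over SSH):

package main

import (
	"os"
	"regexp"
)

// setCrioKey replaces the line defining a TOML key with a new
// quoted value, the same effect as the sed invocations above.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

// e.g. setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
//      setCrioKey(conf, "cgroup_manager", "cgroupfs")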
	I0721 23:41:02.066514   23196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:41:02.075217   23196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0721 23:41:02.075276   23196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0721 23:41:02.086572   23196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:41:02.095451   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:41:02.225576   23196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0721 23:41:02.354323   23196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:41:02.354402   23196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:41:02.358757   23196 start.go:563] Will wait 60s for crictl version
	I0721 23:41:02.358801   23196 ssh_runner.go:195] Run: which crictl
	I0721 23:41:02.362040   23196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:41:02.399992   23196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0721 23:41:02.400072   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:41:02.427409   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:41:02.456411   23196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:41:02.457787   23196 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:41:02.460589   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:02.460935   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:02.460962   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:02.461140   23196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:41:02.465058   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:41:02.477327   23196 kubeadm.go:883] updating cluster {Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0721 23:41:02.477427   23196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:41:02.477467   23196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:41:02.508153   23196 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0721 23:41:02.508222   23196 ssh_runner.go:195] Run: which lz4
	I0721 23:41:02.511743   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0721 23:41:02.511843   23196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0721 23:41:02.515551   23196 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0721 23:41:02.515580   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0721 23:41:03.710943   23196 crio.go:462] duration metric: took 1.199137138s to copy over tarball
	I0721 23:41:03.711017   23196 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0721 23:41:05.793655   23196 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082616359s)
	I0721 23:41:05.793680   23196 crio.go:469] duration metric: took 2.082708301s to extract the tarball
	I0721 23:41:05.793687   23196 ssh_runner.go:146] rm: /preloaded.tar.lz4
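The preload step above stats /preloaded.tar.lz4, scp's the ~406 MB tarball over when missing, and unpacks it into /var so CRI-O starts with the Kubernetes images already present. A sketch of the extraction call (flags copied from the tar invocation above; the wrapper function is an assumption):

package main

import "os/exec"

// extractPreload unpacks the lz4-compressed preload tarball into /var,
// preserving extended attributes as the logged tar command does.
func extractPreload() error {
	return exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run()
}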
	I0721 23:41:05.831124   23196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:41:05.872861   23196 crio.go:514] all images are preloaded for cri-o runtime.
	I0721 23:41:05.872879   23196 cache_images.go:84] Images are preloaded, skipping loading
	I0721 23:41:05.872887   23196 kubeadm.go:934] updating node { 192.168.39.91 8443 v1.30.3 crio true true} ...
	I0721 23:41:05.873014   23196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-564251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 23:41:05.873090   23196 ssh_runner.go:195] Run: crio config
	I0721 23:41:05.913664   23196 cni.go:84] Creating CNI manager for ""
	I0721 23:41:05.913683   23196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0721 23:41:05.913692   23196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 23:41:05.913717   23196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-564251 NodeName:ha-564251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 23:41:05.913875   23196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-564251"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
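minikube renders this kubeadm configuration from the options struct logged above. A trimmed-down sketch of generating such a manifest with text/template (the template text here is abbreviated for illustration and is not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the kubeadm options logged above.
	if err := t.Execute(os.Stdout, map[string]any{
		"NodeIP":    "192.168.39.91",
		"Port":      8443,
		"NodeName":  "ha-564251",
		"CRISocket": "unix:///var/run/crio/crio.sock",
	}); err != nil {
		panic(err)
	}
}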
	
	I0721 23:41:05.913903   23196 kube-vip.go:115] generating kube-vip config ...
	I0721 23:41:05.913944   23196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0721 23:41:05.932034   23196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0721 23:41:05.932159   23196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0721 23:41:05.932216   23196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:41:05.941481   23196 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 23:41:05.941530   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0721 23:41:05.950214   23196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0721 23:41:05.967032   23196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:41:05.982874   23196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0721 23:41:05.997480   23196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
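"scp memory" in the lines above means an in-memory asset (the rendered systemd drop-in, kubeadm.yaml.new, the kube-vip manifest) is streamed straight to the guest rather than staged on disk first. One way to do that, as a sketch (minikube's ssh_runner differs in detail; the helper name is an assumption):

package main

import (
	"bytes"
	"os/exec"
)

// scpMemory pipes an in-memory byte slice into `sudo tee` on the guest,
// writing it to dest without creating a temporary file on the host.
func scpMemory(ip string, data []byte, dest string) error {
	cmd := exec.Command("ssh", "docker@"+ip, "sudo tee "+dest+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}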
	I0721 23:41:06.012067   23196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0721 23:41:06.015784   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:41:06.027237   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:41:06.142381   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:41:06.159549   23196 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251 for IP: 192.168.39.91
	I0721 23:41:06.159567   23196 certs.go:194] generating shared ca certs ...
	I0721 23:41:06.159582   23196 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.159731   23196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:41:06.159769   23196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:41:06.159778   23196 certs.go:256] generating profile certs ...
	I0721 23:41:06.159835   23196 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key
	I0721 23:41:06.159855   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt with IP's: []
	I0721 23:41:06.368527   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt ...
	I0721 23:41:06.368556   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt: {Name:mk4fd652ead42f577c5596c2cceaf3cd9cc210ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.368714   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key ...
	I0721 23:41:06.368724   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key: {Name:mkb22d50d215d5e147d7bc98131bf78c78b3ffb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.368800   23196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.277f15eb
	I0721 23:41:06.368814   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.277f15eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91 192.168.39.254]
	I0721 23:41:06.571331   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.277f15eb ...
	I0721 23:41:06.571360   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.277f15eb: {Name:mk17d073f9fd70c9cc64a6ed93f552a2be0a4d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.571514   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.277f15eb ...
	I0721 23:41:06.571526   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.277f15eb: {Name:mk769c41017d78a39c6d3d1328ad259c5de648a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.571591   23196 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.277f15eb -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt
	I0721 23:41:06.571671   23196 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.277f15eb -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key
	I0721 23:41:06.571725   23196 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key
	I0721 23:41:06.571740   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt with IP's: []
	I0721 23:41:06.759255   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt ...
	I0721 23:41:06.759280   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt: {Name:mk94f17fb27624bf2677b9a0c6710678fdcfe163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.759426   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key ...
	I0721 23:41:06.759437   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key: {Name:mk36259a9d79f8aa2c13c70a83696bd241408831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
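The crypto.go "Generating cert ... with IP's" steps above boil down to standard-library X.509 signing: generate a key, fill a certificate template with the profile's IP SANs, and sign it with the cluster CA. A self-contained Go sketch of that flow (hypothetical names throughout; the self-signed CA here only stands in for minikubeCA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (illustrative only).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// The apiserver profile cert is signed for the service IP, loopback,
	// the node IP and the HA VIP, matching the IP list in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.91"), net.ParseIP("192.168.39.254"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration in the profile config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("signed apiserver-style cert: %d DER bytes\n", len(der))
}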
	I0721 23:41:06.759500   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0721 23:41:06.759512   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0721 23:41:06.759527   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0721 23:41:06.759563   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0721 23:41:06.759581   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0721 23:41:06.759592   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0721 23:41:06.759602   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0721 23:41:06.759613   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0721 23:41:06.759657   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0721 23:41:06.759690   23196 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0721 23:41:06.759699   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:41:06.759722   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:41:06.759747   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:41:06.759767   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:41:06.759802   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:41:06.759831   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0721 23:41:06.759845   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:06.759857   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0721 23:41:06.760437   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:41:06.784701   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:41:06.806275   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:41:06.828117   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:41:06.849183   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0721 23:41:06.870264   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:41:06.892346   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:41:06.917113   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:41:06.965862   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0721 23:41:06.992952   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:41:07.013436   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0721 23:41:07.034226   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 23:41:07.048830   23196 ssh_runner.go:195] Run: openssl version
	I0721 23:41:07.053979   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:41:07.063324   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:07.067182   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:07.067223   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:07.072273   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 23:41:07.081598   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0721 23:41:07.090660   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0721 23:41:07.094423   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0721 23:41:07.094457   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0721 23:41:07.099469   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0721 23:41:07.108948   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0721 23:41:07.118492   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0721 23:41:07.122330   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0721 23:41:07.122371   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0721 23:41:07.127548   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
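The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A Go sketch of the same convention, shelling out to openssl just as the test does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(pem string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject-name hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}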
	I0721 23:41:07.137242   23196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:41:07.140900   23196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:41:07.140956   23196 kubeadm.go:392] StartCluster: {Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:41:07.141049   23196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0721 23:41:07.141087   23196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0721 23:41:07.175295   23196 cri.go:89] found id: ""
	I0721 23:41:07.175365   23196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0721 23:41:07.184254   23196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 23:41:07.192907   23196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 23:41:07.201225   23196 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 23:41:07.201246   23196 kubeadm.go:157] found existing configuration files:
	
	I0721 23:41:07.201287   23196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0721 23:41:07.209026   23196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 23:41:07.209073   23196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 23:41:07.217354   23196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0721 23:41:07.225210   23196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 23:41:07.225260   23196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 23:41:07.233308   23196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0721 23:41:07.241082   23196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 23:41:07.241131   23196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 23:41:07.249118   23196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0721 23:41:07.256727   23196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 23:41:07.256766   23196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
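The grep/rm sequence above keeps a kubeconfig only if it already references the expected control-plane endpoint; on a first start all four files are absent, so every removal is a no-op and kubeadm regenerates them. The same logic as a Go sketch (illustrative helper, not minikube's code):

package main

import (
	"bytes"
	"os"
)

func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // missing or pointing elsewhere: let kubeadm regenerate it
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}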
	I0721 23:41:07.264848   23196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 23:41:07.482211   23196 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 23:41:20.722699   23196 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0721 23:41:20.722753   23196 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 23:41:20.722860   23196 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 23:41:20.723003   23196 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 23:41:20.723134   23196 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0721 23:41:20.723225   23196 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 23:41:20.724887   23196 out.go:204]   - Generating certificates and keys ...
	I0721 23:41:20.724966   23196 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 23:41:20.725021   23196 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 23:41:20.725103   23196 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0721 23:41:20.725173   23196 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0721 23:41:20.725248   23196 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0721 23:41:20.725323   23196 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0721 23:41:20.725377   23196 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0721 23:41:20.725471   23196 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-564251 localhost] and IPs [192.168.39.91 127.0.0.1 ::1]
	I0721 23:41:20.725541   23196 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0721 23:41:20.725646   23196 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-564251 localhost] and IPs [192.168.39.91 127.0.0.1 ::1]
	I0721 23:41:20.725705   23196 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0721 23:41:20.725761   23196 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0721 23:41:20.725799   23196 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0721 23:41:20.725853   23196 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 23:41:20.725924   23196 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 23:41:20.726003   23196 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0721 23:41:20.726081   23196 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 23:41:20.726136   23196 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 23:41:20.726182   23196 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 23:41:20.726246   23196 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 23:41:20.726344   23196 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 23:41:20.727838   23196 out.go:204]   - Booting up control plane ...
	I0721 23:41:20.727929   23196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 23:41:20.728019   23196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 23:41:20.728103   23196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 23:41:20.728250   23196 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 23:41:20.728370   23196 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 23:41:20.728410   23196 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 23:41:20.728529   23196 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0721 23:41:20.728606   23196 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0721 23:41:20.728660   23196 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00213497s
	I0721 23:41:20.728750   23196 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0721 23:41:20.728831   23196 kubeadm.go:310] [api-check] The API server is healthy after 8.738902427s
	I0721 23:41:20.728961   23196 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 23:41:20.729100   23196 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 23:41:20.729368   23196 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 23:41:20.729606   23196 kubeadm.go:310] [mark-control-plane] Marking the node ha-564251 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 23:41:20.729695   23196 kubeadm.go:310] [bootstrap-token] Using token: a27g5i.jpb7sxjvb5ai1hxv
	I0721 23:41:20.731146   23196 out.go:204]   - Configuring RBAC rules ...
	I0721 23:41:20.731263   23196 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 23:41:20.731354   23196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 23:41:20.731480   23196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 23:41:20.731660   23196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0721 23:41:20.731814   23196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 23:41:20.731932   23196 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 23:41:20.732084   23196 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 23:41:20.732145   23196 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 23:41:20.732214   23196 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 23:41:20.732223   23196 kubeadm.go:310] 
	I0721 23:41:20.732303   23196 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 23:41:20.732312   23196 kubeadm.go:310] 
	I0721 23:41:20.732420   23196 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 23:41:20.732431   23196 kubeadm.go:310] 
	I0721 23:41:20.732479   23196 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 23:41:20.732555   23196 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 23:41:20.732623   23196 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 23:41:20.732635   23196 kubeadm.go:310] 
	I0721 23:41:20.732680   23196 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 23:41:20.732686   23196 kubeadm.go:310] 
	I0721 23:41:20.732725   23196 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 23:41:20.732730   23196 kubeadm.go:310] 
	I0721 23:41:20.732772   23196 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 23:41:20.732834   23196 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 23:41:20.732890   23196 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 23:41:20.732897   23196 kubeadm.go:310] 
	I0721 23:41:20.732984   23196 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 23:41:20.733082   23196 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 23:41:20.733093   23196 kubeadm.go:310] 
	I0721 23:41:20.733161   23196 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a27g5i.jpb7sxjvb5ai1hxv \
	I0721 23:41:20.733246   23196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0721 23:41:20.733265   23196 kubeadm.go:310] 	--control-plane 
	I0721 23:41:20.733271   23196 kubeadm.go:310] 
	I0721 23:41:20.733353   23196 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 23:41:20.733363   23196 kubeadm.go:310] 
	I0721 23:41:20.733433   23196 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a27g5i.jpb7sxjvb5ai1hxv \
	I0721 23:41:20.733525   23196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
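The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA certificate's DER-encoded public key (its SubjectPublicKeyInfo), which a joining node recomputes to authenticate the control plane before trusting it. A sketch of that computation, using the cert path from the scp step above:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}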
	I0721 23:41:20.733536   23196 cni.go:84] Creating CNI manager for ""
	I0721 23:41:20.733544   23196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0721 23:41:20.735154   23196 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0721 23:41:20.736326   23196 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0721 23:41:20.741393   23196 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0721 23:41:20.741411   23196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0721 23:41:20.761233   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0721 23:41:21.123036   23196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 23:41:21.123118   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:21.123118   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-564251 minikube.k8s.io/updated_at=2024_07_21T23_41_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-564251 minikube.k8s.io/primary=true
	I0721 23:41:21.143343   23196 ops.go:34] apiserver oom_adj: -16
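The oom_adj probe above verifies the API server runs with a negative OOM score adjustment (-16), so the kernel prefers other processes as victims under memory pressure. Reading the value for any PID is a plain procfs read; a sketch:

package main

import (
	"fmt"
	"os"
)

func main() {
	pid := os.Args[1] // e.g. the result of `pgrep kube-apiserver`, as in the log
	v, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("oom_adj for pid %s: %s", pid, v)
}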
	I0721 23:41:21.275861   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:21.776812   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:22.276729   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:22.776731   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:23.276283   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:23.776558   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:24.276251   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:24.776540   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:25.275977   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:25.776341   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:26.276236   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:26.776231   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:27.276729   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:27.776448   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:28.275886   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:28.776781   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:29.276896   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:29.775991   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:29.863582   23196 kubeadm.go:1113] duration metric: took 8.740521148s to wait for elevateKubeSystemPrivileges
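The burst of `kubectl get sa default` runs above is a readiness poll: bootstrap is only treated as complete once the "default" ServiceAccount exists. A sketch of the same roughly 500ms loop, shelling out to kubectl as the log shows minikube doing (timeout value is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if cmd.Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence visible in the log
	}
	fmt.Println("timed out waiting for the default service account")
}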
	I0721 23:41:29.863624   23196 kubeadm.go:394] duration metric: took 22.722672686s to StartCluster
	I0721 23:41:29.863643   23196 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:29.863734   23196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:41:29.864422   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:29.864676   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0721 23:41:29.864686   23196 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:41:29.864710   23196 start.go:241] waiting for startup goroutines ...
	I0721 23:41:29.864719   23196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0721 23:41:29.864789   23196 addons.go:69] Setting storage-provisioner=true in profile "ha-564251"
	I0721 23:41:29.864799   23196 addons.go:69] Setting default-storageclass=true in profile "ha-564251"
	I0721 23:41:29.864818   23196 addons.go:234] Setting addon storage-provisioner=true in "ha-564251"
	I0721 23:41:29.864836   23196 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-564251"
	I0721 23:41:29.864847   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:41:29.864872   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:29.865305   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.865336   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.865305   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.865409   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.880647   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0721 23:41:29.880990   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40797
	I0721 23:41:29.881121   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.881487   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.881649   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.881675   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.882032   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.882050   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.882053   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.882355   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.882595   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.882639   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.882658   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:41:29.884931   23196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:41:29.885289   23196 kapi.go:59] client config for ha-564251: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key", CAFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 23:41:29.885874   23196 cert_rotation.go:137] Starting client certificate rotation controller
	I0721 23:41:29.886108   23196 addons.go:234] Setting addon default-storageclass=true in "ha-564251"
	I0721 23:41:29.886158   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:41:29.886543   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.886582   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.898096   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0721 23:41:29.898528   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.899072   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.899094   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.899459   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.899650   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:41:29.901936   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:29.901985   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45743
	I0721 23:41:29.902725   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.903198   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.903220   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.903544   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.904041   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.904067   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.904083   23196 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 23:41:29.905493   23196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:41:29.905509   23196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 23:41:29.905528   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:29.908392   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:29.908744   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:29.908766   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:29.908907   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:29.909097   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:29.909254   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:29.909416   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:29.918993   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34639
	I0721 23:41:29.919403   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.919823   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.919840   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.920108   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.920244   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:41:29.921577   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:29.921782   23196 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 23:41:29.921797   23196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 23:41:29.921813   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:29.924296   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:29.924628   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:29.924656   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:29.924813   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:29.924988   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:29.925130   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:29.925315   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:29.980907   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0721 23:41:30.143350   23196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 23:41:30.170523   23196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:41:30.590713   23196 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
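The sed pipeline a few lines up splices a hosts{} stanza into the CoreDNS Corefile so host.minikube.internal resolves to the host gateway (192.168.39.1 here). The same edit expressed in Go (a sketch; the real change is performed remotely via sed, as logged):

package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	// Insert the hosts plugin immediately before the forward stanza,
	// mirroring the `/^        forward ...$/i` sed address above.
	return strings.Replace(corefile, "        forward .", block+"        forward .", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}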
	I0721 23:41:30.590799   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.590825   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.591134   23196 main.go:141] libmachine: (ha-564251) DBG | Closing plugin on server side
	I0721 23:41:30.591163   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.591176   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.591191   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.591203   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.591437   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.591451   23196 main.go:141] libmachine: (ha-564251) DBG | Closing plugin on server side
	I0721 23:41:30.591452   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.591562   23196 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0721 23:41:30.591572   23196 round_trippers.go:469] Request Headers:
	I0721 23:41:30.591583   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:41:30.591593   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:41:30.605336   23196 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0721 23:41:30.605901   23196 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0721 23:41:30.605917   23196 round_trippers.go:469] Request Headers:
	I0721 23:41:30.605928   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:41:30.605934   23196 round_trippers.go:473]     Content-Type: application/json
	I0721 23:41:30.605939   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:41:30.609173   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:41:30.609317   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.609331   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.609642   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.609671   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.609648   23196 main.go:141] libmachine: (ha-564251) DBG | Closing plugin on server side
	I0721 23:41:30.790742   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.790765   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.791045   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.791064   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.791074   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.791083   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.791296   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.791313   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.792879   23196 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0721 23:41:30.794066   23196 addons.go:510] duration metric: took 929.343381ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0721 23:41:30.794097   23196 start.go:246] waiting for cluster config update ...
	I0721 23:41:30.794108   23196 start.go:255] writing updated cluster config ...
	I0721 23:41:30.795568   23196 out.go:177] 
	I0721 23:41:30.797219   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:30.797291   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:41:30.798811   23196 out.go:177] * Starting "ha-564251-m02" control-plane node in "ha-564251" cluster
	I0721 23:41:30.800195   23196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:41:30.800223   23196 cache.go:56] Caching tarball of preloaded images
	I0721 23:41:30.800316   23196 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:41:30.800332   23196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:41:30.800437   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:41:30.800654   23196 start.go:360] acquireMachinesLock for ha-564251-m02: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:41:30.800720   23196 start.go:364] duration metric: took 40.272µs to acquireMachinesLock for "ha-564251-m02"
	I0721 23:41:30.800745   23196 start.go:93] Provisioning new machine with config: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:41:30.800853   23196 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0721 23:41:30.803086   23196 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 23:41:30.803186   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:30.803212   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:30.817649   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0721 23:41:30.818109   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:30.818581   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:30.818663   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:30.818994   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:30.819173   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetMachineName
	I0721 23:41:30.819372   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:30.819533   23196 start.go:159] libmachine.API.Create for "ha-564251" (driver="kvm2")
	I0721 23:41:30.819557   23196 client.go:168] LocalClient.Create starting
	I0721 23:41:30.819589   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0721 23:41:30.819616   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:41:30.819644   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:41:30.819692   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0721 23:41:30.819709   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:41:30.819719   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:41:30.819736   23196 main.go:141] libmachine: Running pre-create checks...
	I0721 23:41:30.819743   23196 main.go:141] libmachine: (ha-564251-m02) Calling .PreCreateCheck
	I0721 23:41:30.819884   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetConfigRaw
	I0721 23:41:30.820207   23196 main.go:141] libmachine: Creating machine...
	I0721 23:41:30.820218   23196 main.go:141] libmachine: (ha-564251-m02) Calling .Create
	I0721 23:41:30.820349   23196 main.go:141] libmachine: (ha-564251-m02) Creating KVM machine...
	I0721 23:41:30.821455   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found existing default KVM network
	I0721 23:41:30.821652   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found existing private KVM network mk-ha-564251
	I0721 23:41:30.821778   23196 main.go:141] libmachine: (ha-564251-m02) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02 ...
	I0721 23:41:30.821794   23196 main.go:141] libmachine: (ha-564251-m02) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:41:30.821846   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:30.821778   23576 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:41:30.821914   23196 main.go:141] libmachine: (ha-564251-m02) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:41:31.043777   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:31.043643   23576 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa...
	I0721 23:41:31.084055   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:31.083910   23576 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/ha-564251-m02.rawdisk...
	I0721 23:41:31.084094   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Writing magic tar header
	I0721 23:41:31.084110   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Writing SSH key tar header
	I0721 23:41:31.084130   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:31.084055   23576 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02 ...
	I0721 23:41:31.084198   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02
	I0721 23:41:31.084239   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0721 23:41:31.084254   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02 (perms=drwx------)
	I0721 23:41:31.084269   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0721 23:41:31.084281   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0721 23:41:31.084297   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0721 23:41:31.084308   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0721 23:41:31.084318   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:41:31.084335   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0721 23:41:31.084347   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0721 23:41:31.084358   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins
	I0721 23:41:31.084369   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home
	I0721 23:41:31.084379   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0721 23:41:31.084389   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Skipping /home - not owner
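
The permission pass above climbs from the machine directory toward /, setting the owner-execute bit on each directory the current user owns so the qemu process can traverse into the store path, and skipping directories owned by someone else ("Skipping /home - not owner"). A Linux-only sketch of that walk-up; the function name walkUpFixPerms is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// walkUpFixPerms climbs from dir toward the filesystem root, adding the
// owner-execute bit to every directory the current user owns; directories
// owned by others are skipped, as in the debug lines above.
func walkUpFixPerms(dir string) error {
	uid := uint32(os.Getuid())
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if st, ok := info.Sys().(*syscall.Stat_t); ok && st.Uid == uid {
			if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
				return err
			}
			fmt.Printf("Setting executable bit set on %s\n", dir)
		} else {
			fmt.Printf("Skipping %s - not owner\n", dir)
		}
		parent := filepath.Dir(dir)
		if parent == dir { // reached "/"
			return nil
		}
		dir = parent
	}
}

func main() {
	_ = walkUpFixPerms("/tmp") // placeholder start directory
}
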
	I0721 23:41:31.084397   23196 main.go:141] libmachine: (ha-564251-m02) Creating domain...
	I0721 23:41:31.085259   23196 main.go:141] libmachine: (ha-564251-m02) define libvirt domain using xml: 
	I0721 23:41:31.085278   23196 main.go:141] libmachine: (ha-564251-m02) <domain type='kvm'>
	I0721 23:41:31.085310   23196 main.go:141] libmachine: (ha-564251-m02)   <name>ha-564251-m02</name>
	I0721 23:41:31.085348   23196 main.go:141] libmachine: (ha-564251-m02)   <memory unit='MiB'>2200</memory>
	I0721 23:41:31.085358   23196 main.go:141] libmachine: (ha-564251-m02)   <vcpu>2</vcpu>
	I0721 23:41:31.085367   23196 main.go:141] libmachine: (ha-564251-m02)   <features>
	I0721 23:41:31.085376   23196 main.go:141] libmachine: (ha-564251-m02)     <acpi/>
	I0721 23:41:31.085385   23196 main.go:141] libmachine: (ha-564251-m02)     <apic/>
	I0721 23:41:31.085395   23196 main.go:141] libmachine: (ha-564251-m02)     <pae/>
	I0721 23:41:31.085405   23196 main.go:141] libmachine: (ha-564251-m02)     
	I0721 23:41:31.085418   23196 main.go:141] libmachine: (ha-564251-m02)   </features>
	I0721 23:41:31.085433   23196 main.go:141] libmachine: (ha-564251-m02)   <cpu mode='host-passthrough'>
	I0721 23:41:31.085444   23196 main.go:141] libmachine: (ha-564251-m02)   
	I0721 23:41:31.085452   23196 main.go:141] libmachine: (ha-564251-m02)   </cpu>
	I0721 23:41:31.085463   23196 main.go:141] libmachine: (ha-564251-m02)   <os>
	I0721 23:41:31.085470   23196 main.go:141] libmachine: (ha-564251-m02)     <type>hvm</type>
	I0721 23:41:31.085480   23196 main.go:141] libmachine: (ha-564251-m02)     <boot dev='cdrom'/>
	I0721 23:41:31.085503   23196 main.go:141] libmachine: (ha-564251-m02)     <boot dev='hd'/>
	I0721 23:41:31.085515   23196 main.go:141] libmachine: (ha-564251-m02)     <bootmenu enable='no'/>
	I0721 23:41:31.085524   23196 main.go:141] libmachine: (ha-564251-m02)   </os>
	I0721 23:41:31.085530   23196 main.go:141] libmachine: (ha-564251-m02)   <devices>
	I0721 23:41:31.085543   23196 main.go:141] libmachine: (ha-564251-m02)     <disk type='file' device='cdrom'>
	I0721 23:41:31.085556   23196 main.go:141] libmachine: (ha-564251-m02)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/boot2docker.iso'/>
	I0721 23:41:31.085568   23196 main.go:141] libmachine: (ha-564251-m02)       <target dev='hdc' bus='scsi'/>
	I0721 23:41:31.085576   23196 main.go:141] libmachine: (ha-564251-m02)       <readonly/>
	I0721 23:41:31.085590   23196 main.go:141] libmachine: (ha-564251-m02)     </disk>
	I0721 23:41:31.085601   23196 main.go:141] libmachine: (ha-564251-m02)     <disk type='file' device='disk'>
	I0721 23:41:31.085615   23196 main.go:141] libmachine: (ha-564251-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0721 23:41:31.085626   23196 main.go:141] libmachine: (ha-564251-m02)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/ha-564251-m02.rawdisk'/>
	I0721 23:41:31.085638   23196 main.go:141] libmachine: (ha-564251-m02)       <target dev='hda' bus='virtio'/>
	I0721 23:41:31.085648   23196 main.go:141] libmachine: (ha-564251-m02)     </disk>
	I0721 23:41:31.085657   23196 main.go:141] libmachine: (ha-564251-m02)     <interface type='network'>
	I0721 23:41:31.085667   23196 main.go:141] libmachine: (ha-564251-m02)       <source network='mk-ha-564251'/>
	I0721 23:41:31.085674   23196 main.go:141] libmachine: (ha-564251-m02)       <model type='virtio'/>
	I0721 23:41:31.085683   23196 main.go:141] libmachine: (ha-564251-m02)     </interface>
	I0721 23:41:31.085690   23196 main.go:141] libmachine: (ha-564251-m02)     <interface type='network'>
	I0721 23:41:31.085704   23196 main.go:141] libmachine: (ha-564251-m02)       <source network='default'/>
	I0721 23:41:31.085715   23196 main.go:141] libmachine: (ha-564251-m02)       <model type='virtio'/>
	I0721 23:41:31.085725   23196 main.go:141] libmachine: (ha-564251-m02)     </interface>
	I0721 23:41:31.085733   23196 main.go:141] libmachine: (ha-564251-m02)     <serial type='pty'>
	I0721 23:41:31.085743   23196 main.go:141] libmachine: (ha-564251-m02)       <target port='0'/>
	I0721 23:41:31.085751   23196 main.go:141] libmachine: (ha-564251-m02)     </serial>
	I0721 23:41:31.085759   23196 main.go:141] libmachine: (ha-564251-m02)     <console type='pty'>
	I0721 23:41:31.085771   23196 main.go:141] libmachine: (ha-564251-m02)       <target type='serial' port='0'/>
	I0721 23:41:31.085781   23196 main.go:141] libmachine: (ha-564251-m02)     </console>
	I0721 23:41:31.085805   23196 main.go:141] libmachine: (ha-564251-m02)     <rng model='virtio'>
	I0721 23:41:31.085823   23196 main.go:141] libmachine: (ha-564251-m02)       <backend model='random'>/dev/random</backend>
	I0721 23:41:31.085836   23196 main.go:141] libmachine: (ha-564251-m02)     </rng>
	I0721 23:41:31.085846   23196 main.go:141] libmachine: (ha-564251-m02)     
	I0721 23:41:31.085854   23196 main.go:141] libmachine: (ha-564251-m02)     
	I0721 23:41:31.085864   23196 main.go:141] libmachine: (ha-564251-m02)   </devices>
	I0721 23:41:31.085872   23196 main.go:141] libmachine: (ha-564251-m02) </domain>
	I0721 23:41:31.085881   23196 main.go:141] libmachine: (ha-564251-m02) 
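
The domain XML logged line by line above is rendered once and handed to libvirt to define the VM. A self-contained sketch of producing a trimmed version of that XML with text/template; the struct fields here are illustrative, not minikube's actual config type, and the rendered output would be passed to libvirt's define-XML call rather than printed:

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed version of the XML above; only the fields that
// vary per machine are templated.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

type machine struct {
	Name, DiskPath, Network string
	MemoryMiB, CPUs         int
}

func main() {
	m := machine{
		Name:      "ha-564251-m02",
		DiskPath:  "/path/to/ha-564251-m02.rawdisk", // placeholder
		MemoryMiB: 2200,
		CPUs:      2,
		Network:   "mk-ha-564251",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}
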
	I0721 23:41:31.092166   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:91:eb:c9 in network default
	I0721 23:41:31.092648   23196 main.go:141] libmachine: (ha-564251-m02) Ensuring networks are active...
	I0721 23:41:31.092671   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:31.093348   23196 main.go:141] libmachine: (ha-564251-m02) Ensuring network default is active
	I0721 23:41:31.093652   23196 main.go:141] libmachine: (ha-564251-m02) Ensuring network mk-ha-564251 is active
	I0721 23:41:31.093972   23196 main.go:141] libmachine: (ha-564251-m02) Getting domain xml...
	I0721 23:41:31.094686   23196 main.go:141] libmachine: (ha-564251-m02) Creating domain...
	I0721 23:41:32.308261   23196 main.go:141] libmachine: (ha-564251-m02) Waiting to get IP...
	I0721 23:41:32.309190   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:32.309536   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:32.309560   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:32.309517   23576 retry.go:31] will retry after 279.941039ms: waiting for machine to come up
	I0721 23:41:32.590998   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:32.591342   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:32.591371   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:32.591289   23576 retry.go:31] will retry after 273.960435ms: waiting for machine to come up
	I0721 23:41:32.866931   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:32.867402   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:32.867426   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:32.867369   23576 retry.go:31] will retry after 384.003174ms: waiting for machine to come up
	I0721 23:41:33.252760   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:33.253210   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:33.253232   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:33.253160   23576 retry.go:31] will retry after 437.950795ms: waiting for machine to come up
	I0721 23:41:33.692821   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:33.693233   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:33.693258   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:33.693180   23576 retry.go:31] will retry after 658.15435ms: waiting for machine to come up
	I0721 23:41:34.353216   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:34.353605   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:34.353628   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:34.353550   23576 retry.go:31] will retry after 893.609942ms: waiting for machine to come up
	I0721 23:41:35.248776   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:35.249208   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:35.249231   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:35.249177   23576 retry.go:31] will retry after 1.020462835s: waiting for machine to come up
	I0721 23:41:36.271363   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:36.271841   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:36.271876   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:36.271785   23576 retry.go:31] will retry after 1.308791009s: waiting for machine to come up
	I0721 23:41:37.581782   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:37.582248   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:37.582278   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:37.582175   23576 retry.go:31] will retry after 1.458259843s: waiting for machine to come up
	I0721 23:41:39.042669   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:39.043011   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:39.043055   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:39.042963   23576 retry.go:31] will retry after 1.628790411s: waiting for machine to come up
	I0721 23:41:40.673608   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:40.674113   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:40.674138   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:40.674037   23576 retry.go:31] will retry after 2.891000365s: waiting for machine to come up
	I0721 23:41:43.566289   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:43.566794   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:43.566820   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:43.566748   23576 retry.go:31] will retry after 3.017497145s: waiting for machine to come up
	I0721 23:41:46.585567   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:46.585983   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:46.586010   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:46.585943   23576 retry.go:31] will retry after 4.417647061s: waiting for machine to come up
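
The retry.go:31 lines show the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a randomized delay that roughly doubles, from ~280ms up to ~4.4s here. A minimal sketch of that jittered-backoff pattern, with lookupIP standing in for the lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a
// jittered, exponentially growing interval in between -- the same shape
// as the "will retry after ..." lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// jitter in [0.5, 1.5) of the current backoff step
		d := time.Duration(float64(base) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		base *= 2
	}
	return errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	lookupIP := func() error { // stand-in for the DHCP lease lookup
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}
	if err := retry(10, 300*time.Millisecond, lookupIP); err != nil {
		panic(err)
	}
	fmt.Println("found IP")
}
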
	I0721 23:41:51.005071   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.005507   23196 main.go:141] libmachine: (ha-564251-m02) Found IP for machine: 192.168.39.202
	I0721 23:41:51.005535   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has current primary IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.005544   23196 main.go:141] libmachine: (ha-564251-m02) Reserving static IP address...
	I0721 23:41:51.005920   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find host DHCP lease matching {name: "ha-564251-m02", mac: "52:54:00:38:f8:82", ip: "192.168.39.202"} in network mk-ha-564251
	I0721 23:41:51.075991   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Getting to WaitForSSH function...
	I0721 23:41:51.076035   23196 main.go:141] libmachine: (ha-564251-m02) Reserved static IP address: 192.168.39.202
	I0721 23:41:51.076050   23196 main.go:141] libmachine: (ha-564251-m02) Waiting for SSH to be available...
	I0721 23:41:51.078414   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.078825   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.078855   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.078949   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Using SSH client type: external
	I0721 23:41:51.078970   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa (-rw-------)
	I0721 23:41:51.078995   23196 main.go:141] libmachine: (ha-564251-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0721 23:41:51.079009   23196 main.go:141] libmachine: (ha-564251-m02) DBG | About to run SSH command:
	I0721 23:41:51.079024   23196 main.go:141] libmachine: (ha-564251-m02) DBG | exit 0
	I0721 23:41:51.206770   23196 main.go:141] libmachine: (ha-564251-m02) DBG | SSH cmd err, output: <nil>: 
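
WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the command succeeds, as the argument dump above shows. A sketch of building that probe; the address and key path are placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// sshProbe runs `exit 0` on the target via the external ssh client,
// mirroring the option list logged above.
func sshProbe(user, addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// placeholder values; the real ones come from the machine config
	err := sshProbe("docker", "192.168.39.202", "/path/to/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}
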
	I0721 23:41:51.206977   23196 main.go:141] libmachine: (ha-564251-m02) KVM machine creation complete!
	I0721 23:41:51.207321   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetConfigRaw
	I0721 23:41:51.207919   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:51.208096   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:51.208248   23196 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0721 23:41:51.208265   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:41:51.209635   23196 main.go:141] libmachine: Detecting operating system of created instance...
	I0721 23:41:51.209650   23196 main.go:141] libmachine: Waiting for SSH to be available...
	I0721 23:41:51.209664   23196 main.go:141] libmachine: Getting to WaitForSSH function...
	I0721 23:41:51.209676   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.212146   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.212578   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.212603   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.212780   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.212942   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.213098   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.213216   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.213384   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.213576   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.213588   23196 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0721 23:41:51.325723   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:41:51.325745   23196 main.go:141] libmachine: Detecting the provisioner...
	I0721 23:41:51.325773   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.328472   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.328853   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.328881   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.328963   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.329128   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.329296   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.329445   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.329591   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.329767   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.329781   23196 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0721 23:41:51.439120   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0721 23:41:51.439200   23196 main.go:141] libmachine: found compatible host: buildroot
	I0721 23:41:51.439211   23196 main.go:141] libmachine: Provisioning with buildroot...
	I0721 23:41:51.439224   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetMachineName
	I0721 23:41:51.439507   23196 buildroot.go:166] provisioning hostname "ha-564251-m02"
	I0721 23:41:51.439529   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetMachineName
	I0721 23:41:51.439725   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.442124   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.442501   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.442536   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.442671   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.442847   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.443009   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.443198   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.443385   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.443600   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.443613   23196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-564251-m02 && echo "ha-564251-m02" | sudo tee /etc/hostname
	I0721 23:41:51.563554   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251-m02
	
	I0721 23:41:51.563586   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.566345   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.566765   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.566793   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.566949   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.567120   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.567292   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.567459   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.567583   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.567731   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.567746   23196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-564251-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-564251-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-564251-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:41:51.686398   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
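
The /etc/hosts edit above is idempotent: it rewrites nothing if a line already maps the hostname, replaces an existing 127.0.1.1 entry when present, and appends one otherwise. A small helper that emits the same shell (the function name is illustrative):

package main

import "fmt"

// hostsFixCmd returns the shell run over SSH above: ensure /etc/hosts has
// a 127.0.1.1 entry for the node's hostname without duplicating it.
func hostsFixCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixCmd("ha-564251-m02"))
}
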
	I0721 23:41:51.686425   23196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:41:51.686443   23196 buildroot.go:174] setting up certificates
	I0721 23:41:51.686451   23196 provision.go:84] configureAuth start
	I0721 23:41:51.686460   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetMachineName
	I0721 23:41:51.686809   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:41:51.689485   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.689782   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.689809   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.690002   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.692216   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.692584   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.692610   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.692745   23196 provision.go:143] copyHostCerts
	I0721 23:41:51.692783   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:41:51.692812   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0721 23:41:51.692820   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:41:51.692884   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:41:51.692964   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:41:51.692981   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0721 23:41:51.692987   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:41:51.693010   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:41:51.693061   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:41:51.693077   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0721 23:41:51.693081   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:41:51.693100   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:41:51.693156   23196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.ha-564251-m02 san=[127.0.0.1 192.168.39.202 ha-564251-m02 localhost minikube]
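
provision.go:117 issues a server certificate whose SAN list covers every name the node can be reached by: loopback, the new node IP, its hostname, and the generic names. A self-contained sketch of issuing such a cert with crypto/x509; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-564251-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN set from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.202")},
		DNSNames:    []string{"ha-564251-m02", "localhost", "minikube"},
	}
	// Self-signed for the sketch; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
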
	I0721 23:41:51.755558   23196 provision.go:177] copyRemoteCerts
	I0721 23:41:51.755608   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:41:51.755634   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.758285   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.758634   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.758658   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.758847   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.759014   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.759144   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.759245   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:41:51.844033   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0721 23:41:51.844108   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:41:51.867176   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0721 23:41:51.867228   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:41:51.888974   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0721 23:41:51.889030   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0721 23:41:51.910077   23196 provision.go:87] duration metric: took 223.613935ms to configureAuth
	I0721 23:41:51.910101   23196 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:41:51.910281   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:51.910377   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.913029   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.913307   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.913334   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.913488   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.913621   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.913718   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.913790   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.913942   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.914083   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.914095   23196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:41:52.180201   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:41:52.180229   23196 main.go:141] libmachine: Checking connection to Docker...
	I0721 23:41:52.180238   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetURL
	I0721 23:41:52.181546   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Using libvirt version 6000000
	I0721 23:41:52.183518   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.183824   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.183845   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.183983   23196 main.go:141] libmachine: Docker is up and running!
	I0721 23:41:52.184001   23196 main.go:141] libmachine: Reticulating splines...
	I0721 23:41:52.184013   23196 client.go:171] duration metric: took 21.364444929s to LocalClient.Create
	I0721 23:41:52.184042   23196 start.go:167] duration metric: took 21.364519572s to libmachine.API.Create "ha-564251"
	I0721 23:41:52.184054   23196 start.go:293] postStartSetup for "ha-564251-m02" (driver="kvm2")
	I0721 23:41:52.184066   23196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:41:52.184093   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.184318   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:41:52.184338   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:52.186492   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.186805   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.186873   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.186944   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:52.187195   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.187349   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:52.187486   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:41:52.272188   23196 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:41:52.275999   23196 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:41:52.276022   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:41:52.276086   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:41:52.276168   23196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0721 23:41:52.276179   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0721 23:41:52.276279   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 23:41:52.284945   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:41:52.306026   23196 start.go:296] duration metric: took 121.960075ms for postStartSetup
	I0721 23:41:52.306075   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetConfigRaw
	I0721 23:41:52.306683   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:41:52.309314   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.309643   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.309671   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.309870   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:41:52.310034   23196 start.go:128] duration metric: took 21.509168801s to createHost
	I0721 23:41:52.310055   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:52.312372   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.312732   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.312758   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.312846   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:52.313030   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.313176   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.313288   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:52.313451   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:52.313603   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:52.313613   23196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:41:52.422971   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605312.384668670
	
	I0721 23:41:52.422996   23196 fix.go:216] guest clock: 1721605312.384668670
	I0721 23:41:52.423004   23196 fix.go:229] Guest: 2024-07-21 23:41:52.38466867 +0000 UTC Remote: 2024-07-21 23:41:52.310044935 +0000 UTC m=+71.796673073 (delta=74.623735ms)
	I0721 23:41:52.423016   23196 fix.go:200] guest clock delta is within tolerance: 74.623735ms
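
The fix.go lines compare the guest clock (read over SSH as seconds.nanoseconds) with the host clock and only resync when the delta exceeds a tolerance; here the 74.6ms delta passes. The check reduces to a comparison like the following, where the one-second tolerance is an assumed value for the sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	// illustrative values matching the log above
	guest := time.Unix(1721605312, 384668670)
	host := guest.Add(-74623735 * time.Nanosecond)
	const tolerance = 1 * time.Second // assumed tolerance for the sketch

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Println("would resync guest clock")
	}
}
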
	I0721 23:41:52.423021   23196 start.go:83] releasing machines lock for "ha-564251-m02", held for 21.622289193s
	I0721 23:41:52.423039   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.423338   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:41:52.425783   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.426046   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.426069   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.428421   23196 out.go:177] * Found network options:
	I0721 23:41:52.429810   23196 out.go:177]   - NO_PROXY=192.168.39.91
	W0721 23:41:52.431059   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0721 23:41:52.431089   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.431611   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.431829   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.431925   23196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:41:52.431960   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	W0721 23:41:52.432043   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0721 23:41:52.432125   23196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:41:52.432148   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:52.434775   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.435025   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.435195   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.435224   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.435352   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:52.435461   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.435486   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.435537   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.435607   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:52.435675   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:52.435759   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.435823   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:41:52.435919   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:52.436051   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:41:52.668235   23196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:41:52.673505   23196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:41:52.673555   23196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:41:52.689044   23196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 23:41:52.689060   23196 start.go:495] detecting cgroup driver to use...
	I0721 23:41:52.689109   23196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:41:52.703951   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:41:52.717029   23196 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:41:52.717089   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:41:52.730341   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:41:52.743683   23196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:41:52.852147   23196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:41:52.991439   23196 docker.go:233] disabling docker service ...
	I0721 23:41:52.991501   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:41:53.005176   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:41:53.017426   23196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:41:53.149184   23196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:41:53.253962   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0721 23:41:53.266638   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:41:53.285081   23196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:41:53.285147   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.294456   23196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:41:53.294518   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.304023   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.313431   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.323972   23196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:41:53.333492   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.342713   23196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.358065   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.367571   23196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:41:53.376039   23196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0721 23:41:53.376091   23196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0721 23:41:53.387243   23196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:41:53.396362   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:41:53.500320   23196 ssh_runner.go:195] Run: sudo systemctl restart crio
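
The crio.go steps above mutate /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup) and then reload and restart the service. A sketch of driving that sequence, with run standing in for minikube's ssh_runner and executing locally:

package main

import (
	"fmt"
	"os/exec"
)

// run stands in for minikube's ssh_runner: it executes a shell command on
// the target host (here, locally, for the sketch).
func run(cmd string) error {
	fmt.Println("Run:", cmd)
	return exec.Command("sh", "-c", cmd).Run()
}

func configureCRIO(pauseImage, cgroupMgr string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			return fmt.Errorf("%q failed: %w", s, err)
		}
	}
	return nil
}

func main() {
	_ = configureCRIO("registry.k8s.io/pause:3.9", "cgroupfs")
}
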
	I0721 23:41:53.631312   23196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:41:53.631382   23196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:41:53.635842   23196 start.go:563] Will wait 60s for crictl version
	I0721 23:41:53.635905   23196 ssh_runner.go:195] Run: which crictl
	I0721 23:41:53.639388   23196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:41:53.680490   23196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0721 23:41:53.680577   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:41:53.706998   23196 ssh_runner.go:195] Run: crio --version
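
Rather than sleeping a fixed interval after the restart, start.go:542 and start.go:563 poll for the CRI socket and a working crictl with a 60-second deadline. A minimal poll-with-deadline sketch for the socket path:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the deadline passes, like the
// 60s waits for /var/run/crio/crio.sock above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
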
	I0721 23:41:53.735897   23196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:41:53.737231   23196 out.go:177]   - env NO_PROXY=192.168.39.91
	I0721 23:41:53.738546   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:41:53.741241   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:53.741622   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:53.741649   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:53.741830   23196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:41:53.745640   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:41:53.757594   23196 mustload.go:65] Loading cluster: ha-564251
	I0721 23:41:53.757751   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:53.757983   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:53.758015   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:53.773453   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0721 23:41:53.773841   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:53.774308   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:53.774330   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:53.774705   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:53.774900   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:41:53.776562   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:41:53.776847   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:53.776888   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:53.791078   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0721 23:41:53.791437   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:53.791839   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:53.791859   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:53.792147   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:53.792495   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:53.792646   23196 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251 for IP: 192.168.39.202
	I0721 23:41:53.792658   23196 certs.go:194] generating shared ca certs ...
	I0721 23:41:53.792671   23196 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:53.792778   23196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:41:53.792812   23196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:41:53.792820   23196 certs.go:256] generating profile certs ...
	I0721 23:41:53.792910   23196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key
	I0721 23:41:53.792937   23196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.c0c593bf
	I0721 23:41:53.792948   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.c0c593bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91 192.168.39.202 192.168.39.254]
	I0721 23:41:54.020469   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.c0c593bf ...
	I0721 23:41:54.020494   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.c0c593bf: {Name:mk0d4d16dfd271a385f6ab767cfa09f740f8d565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:54.020652   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.c0c593bf ...
	I0721 23:41:54.020665   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.c0c593bf: {Name:mk96eec0984ded953402c5b044b0f82745c535b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:54.020731   23196 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.c0c593bf -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt
	I0721 23:41:54.020855   23196 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.c0c593bf -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key
	I0721 23:41:54.020970   23196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key
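The apiserver cert generated above has to carry every address a client may dial as an IP SAN: the in-cluster service IP (10.96.0.1), loopback, both control-plane node IPs, and the HA VIP 192.168.39.254. Below is a hedged sketch of producing such a cert with Go's crypto/x509; the CA is generated inline only to keep the example self-contained, whereas minikube reuses the existing ca.key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Inline CA for self-containment; minikube reuses the existing ca.key.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	serving := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above: service IP, loopback,
		// both control-plane node IPs, and the HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.91"), net.ParseIP("192.168.39.202"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, serving, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}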
	I0721 23:41:54.020985   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0721 23:41:54.020997   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0721 23:41:54.021010   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0721 23:41:54.021023   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0721 23:41:54.021035   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0721 23:41:54.021048   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0721 23:41:54.021059   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0721 23:41:54.021071   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0721 23:41:54.021111   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0721 23:41:54.021136   23196 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0721 23:41:54.021145   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:41:54.021164   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:41:54.021184   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:41:54.021204   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:41:54.021238   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:41:54.021264   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0721 23:41:54.021277   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0721 23:41:54.021290   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:54.021319   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:54.023945   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:54.024508   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:54.024538   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:54.024735   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:54.024946   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:54.025128   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:54.025257   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:54.094999   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0721 23:41:54.099479   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0721 23:41:54.109463   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0721 23:41:54.113544   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0721 23:41:54.122906   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0721 23:41:54.126673   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0721 23:41:54.136429   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0721 23:41:54.139970   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0721 23:41:54.149258   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0721 23:41:54.152853   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0721 23:41:54.161904   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0721 23:41:54.165554   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0721 23:41:54.174669   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:41:54.199080   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:41:54.223014   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:41:54.246728   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:41:54.270483   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0721 23:41:54.291692   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:41:54.312692   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:41:54.333545   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:41:54.354460   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0721 23:41:54.375476   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0721 23:41:54.396228   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:41:54.417338   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0721 23:41:54.433622   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0721 23:41:54.450100   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0721 23:41:54.466106   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0721 23:41:54.482430   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0721 23:41:54.498541   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0721 23:41:54.513446   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0721 23:41:54.528753   23196 ssh_runner.go:195] Run: openssl version
	I0721 23:41:54.533953   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0721 23:41:54.543439   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0721 23:41:54.547394   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0721 23:41:54.547436   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0721 23:41:54.552691   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 23:41:54.562035   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:41:54.572210   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:54.575964   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:54.576016   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:54.580923   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 23:41:54.590450   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0721 23:41:54.600593   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0721 23:41:54.604659   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0721 23:41:54.604693   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0721 23:41:54.609777   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
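The ln -fs steps above implement OpenSSL's CA lookup convention: a trusted cert must be reachable in /etc/ssl/certs as <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA above). A small sketch of that step, shelling out to openssl just as the log does; the helper name is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash is an illustrative helper: compute the OpenSSL subject
// hash of a PEM cert and symlink it as <hash>.0 in the trust directory.
func linkBySubjectHash(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
	link := filepath.Join(certDir, hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}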
	I0721 23:41:54.620009   23196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:41:54.623546   23196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:41:54.623592   23196 kubeadm.go:934] updating node {m02 192.168.39.202 8443 v1.30.3 crio true true} ...
	I0721 23:41:54.623672   23196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-564251-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 23:41:54.623695   23196 kube-vip.go:115] generating kube-vip config ...
	I0721 23:41:54.623726   23196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0721 23:41:54.646367   23196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0721 23:41:54.646418   23196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
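Static-pod manifests like the generated kube-vip config above are typically produced by filling a template with the per-cluster values (VIP address, API port, NIC). A trimmed sketch with Go's text/template, keeping only a few of the env vars from the full manifest; the template body is a stand-in, not minikube's actual one.

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for the full manifest above; only a few env vars kept.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the generated config above: HA VIP, API port, eth0.
	_ = tmpl.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}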
	I0721 23:41:54.646459   23196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:41:54.658093   23196 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0721 23:41:54.658134   23196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0721 23:41:54.666905   23196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0721 23:41:54.666929   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0721 23:41:54.666970   23196 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0721 23:41:54.667001   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0721 23:41:54.667008   23196 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0721 23:41:54.670824   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0721 23:41:54.670853   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0721 23:41:55.493266   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0721 23:41:55.493355   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0721 23:41:55.497798   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0721 23:41:55.497827   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0721 23:41:55.666177   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:41:55.699325   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0721 23:41:55.699430   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0721 23:41:55.711440   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0721 23:41:55.711478   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
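Each binary transfer above is guarded by an existence check: only when `stat -c "%s %y"` exits non-zero is the large cached binary (the kubelet alone is ~100 MB) copied over. A local sketch of the same skip-if-present pattern; the real code runs stat and the copy over SSH rather than on the local filesystem.

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureBinary copies the cached binary only when the target is absent,
// matching the stat-then-scp sequence in the log.
func ensureBinary(cached, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // already present: skip the copy entirely
	} else if !os.IsNotExist(err) {
		return err
	}
	src, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	err := ensureBinary(
		os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.30.3/kubelet"),
		"/var/lib/minikube/binaries/v1.30.3/kubelet",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}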
	I0721 23:41:56.088381   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0721 23:41:56.097283   23196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0721 23:41:56.112806   23196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:41:56.127525   23196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0721 23:41:56.142595   23196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0721 23:41:56.145949   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:41:56.156798   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:41:56.258151   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:41:56.273277   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:41:56.273786   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:56.273847   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:56.291329   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37707
	I0721 23:41:56.291911   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:56.292375   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:56.292395   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:56.292729   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:56.292917   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:56.293055   23196 start.go:317] joinCluster: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:41:56.293140   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0721 23:41:56.293155   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:56.296437   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:56.296935   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:56.296965   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:56.297153   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:56.297332   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:56.297500   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:56.297629   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:56.440022   23196 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:41:56.440065   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 08e5ji.aajvcalhdut83cxr --discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-564251-m02 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443"
	I0721 23:42:19.196999   23196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 08e5ji.aajvcalhdut83cxr --discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-564251-m02 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443": (22.756910365s)
	I0721 23:42:19.197038   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0721 23:42:19.740638   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-564251-m02 minikube.k8s.io/updated_at=2024_07_21T23_42_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-564251 minikube.k8s.io/primary=false
	I0721 23:42:19.851899   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-564251-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0721 23:42:19.983706   23196 start.go:319] duration metric: took 23.690643373s to joinCluster
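The join itself is the two-step sequence visible above: `kubeadm token create --print-join-command --ttl=0` on the existing control plane, then the printed command run on m02 with control-plane flags appended. A sketch of assembling that command; minikube executes the resulting string over SSH rather than locally.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (run on the existing control plane): mint a join command.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}
	join := strings.TrimSpace(string(out))
	// Step 2: the log appends control-plane flags before running it on m02.
	join += " --control-plane --apiserver-advertise-address=192.168.39.202" +
		" --apiserver-bind-port=8443"
	fmt.Println(join)
}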
	I0721 23:42:19.983780   23196 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:42:19.984067   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:42:19.985799   23196 out.go:177] * Verifying Kubernetes components...
	I0721 23:42:19.986844   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:42:20.243378   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:42:20.316427   23196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:42:20.316792   23196 kapi.go:59] client config for ha-564251: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key", CAFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0721 23:42:20.316877   23196 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.91:8443
	I0721 23:42:20.317156   23196 node_ready.go:35] waiting up to 6m0s for node "ha-564251-m02" to be "Ready" ...
	I0721 23:42:20.317269   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:20.317282   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:20.317292   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:20.317296   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:20.336442   23196 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0721 23:42:20.818326   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:20.818348   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:20.818361   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:20.818367   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:20.821723   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:21.317491   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:21.317510   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:21.317518   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:21.317521   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:21.322410   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:21.818257   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:21.818276   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:21.818284   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:21.818288   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:21.821223   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:22.318085   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:22.318112   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:22.318121   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:22.318135   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:22.321462   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:22.322038   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:22.817369   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:22.817403   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:22.817411   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:22.817414   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:22.821429   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:23.317419   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:23.317438   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:23.317446   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:23.317449   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:23.320648   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:23.818236   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:23.818261   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:23.818273   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:23.818281   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:23.821320   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:24.318177   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:24.318198   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:24.318206   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:24.318212   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:24.321794   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:24.322590   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:24.817928   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:24.817953   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:24.817964   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:24.817970   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:24.822397   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:25.317695   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:25.317717   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:25.317727   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:25.317733   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:25.320800   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:25.818263   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:25.818287   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:25.818305   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:25.818310   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:25.821480   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:26.317875   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:26.317899   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:26.317910   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:26.317915   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:26.321277   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:26.818278   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:26.818296   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:26.818303   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:26.818306   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:26.822817   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:26.823289   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:27.317434   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:27.317456   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:27.317463   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:27.317467   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:27.320759   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:27.817671   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:27.817690   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:27.817698   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:27.817703   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:27.820392   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:28.317755   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:28.317777   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:28.317785   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:28.317789   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:28.320846   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:28.818065   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:28.818083   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:28.818091   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:28.818095   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:28.821179   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:29.318242   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:29.318268   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:29.318279   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:29.318287   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:29.356069   23196 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0721 23:42:29.356708   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:29.817972   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:29.817995   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:29.818003   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:29.818009   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:29.820909   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:30.317373   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:30.317396   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:30.317404   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:30.317408   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:30.320266   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:30.817493   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:30.817513   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:30.817522   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:30.817526   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:30.820482   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:31.317562   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:31.317597   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:31.317608   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:31.317613   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:31.321817   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:31.817643   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:31.817666   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:31.817677   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:31.817683   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:31.820508   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:31.821098   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:32.317456   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:32.317476   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:32.317484   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:32.317488   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:32.322017   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:32.818057   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:32.818076   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:32.818084   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:32.818089   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:32.821032   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:33.318322   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:33.318349   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:33.318359   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:33.318366   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:33.321755   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:33.817734   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:33.817751   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:33.817760   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:33.817763   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:33.821052   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:33.821766   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:34.318206   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:34.318226   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:34.318233   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:34.318237   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:34.321495   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:34.817545   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:34.817579   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:34.817590   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:34.817595   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:34.820807   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:35.317762   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:35.317787   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:35.317798   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:35.317803   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:35.320872   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:35.818257   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:35.818274   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:35.818282   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:35.818287   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:35.821211   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:35.821933   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:36.318144   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:36.318164   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:36.318171   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:36.318176   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:36.321896   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:36.817764   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:36.817784   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:36.817793   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:36.817797   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:36.821184   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:37.318365   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:37.318395   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.318407   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.318417   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.322141   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:37.818241   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:37.818261   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.818271   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.818275   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.821251   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.821828   23196 node_ready.go:49] node "ha-564251-m02" has status "Ready":"True"
	I0721 23:42:37.821851   23196 node_ready.go:38] duration metric: took 17.504666665s for node "ha-564251-m02" to be "Ready" ...
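The repeating GETs above are a plain 500 ms poll of the node object until its NodeReady condition turns True (visible in the .317/.817 timestamps). Below is a client-go sketch of the same wait; the kubeconfig path is illustrative, and transient API errors are swallowed so the poll keeps going.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; the test loads its own profile config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-564251-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}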
	I0721 23:42:37.821862   23196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:42:37.821933   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:37.821945   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.821956   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.821966   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.831685   23196 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0721 23:42:37.837771   23196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.837841   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bsbzk
	I0721 23:42:37.837849   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.837857   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.837862   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.840272   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.840792   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:37.840805   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.840812   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.840816   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.843255   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.843999   23196 pod_ready.go:92] pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:37.844022   23196 pod_ready.go:81] duration metric: took 6.228906ms for pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.844034   23196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.844092   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f4lqn
	I0721 23:42:37.844100   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.844107   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.844111   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.846712   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.847698   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:37.847717   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.847727   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.847732   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.849786   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.850537   23196 pod_ready.go:92] pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:37.850555   23196 pod_ready.go:81] duration metric: took 6.509196ms for pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.850570   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.850638   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251
	I0721 23:42:37.850649   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.850659   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.850665   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.852494   23196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0721 23:42:37.853048   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:37.853064   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.853074   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.853079   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.855065   23196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0721 23:42:37.855808   23196 pod_ready.go:92] pod "etcd-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:37.855823   23196 pod_ready.go:81] duration metric: took 5.24199ms for pod "etcd-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.855833   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.855886   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251-m02
	I0721 23:42:37.855895   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.855905   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.855915   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.857862   23196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0721 23:42:37.858236   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:37.858248   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.858256   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.858263   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.860300   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.860668   23196 pod_ready.go:92] pod "etcd-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:37.860682   23196 pod_ready.go:81] duration metric: took 4.841194ms for pod "etcd-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.860697   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:38.019092   23196 request.go:629] Waited for 158.334528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251
	I0721 23:42:38.019148   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251
	I0721 23:42:38.019153   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.019160   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.019164   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.022158   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:38.219035   23196 request.go:629] Waited for 196.175145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:38.219084   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:38.219090   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.219098   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.219103   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.221664   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:38.222235   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:38.222261   23196 pod_ready.go:81] duration metric: took 361.557372ms for pod "kube-apiserver-ha-564251" in "kube-system" namespace to be "Ready" ...
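The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter (default 5 requests/sec with a burst of 10), not from the API server. A minimal sketch of where those limits live when building a client; the kubeconfig path is hypothetical:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newClient() (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default 5; the ~200ms waits logged above are this limiter kicking in
        cfg.Burst = 100 // default 10
        return kubernetes.NewForConfig(cfg)
    }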
	I0721 23:42:38.222285   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:38.418315   23196 request.go:629] Waited for 195.950584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m02
	I0721 23:42:38.418385   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m02
	I0721 23:42:38.418390   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.418398   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.418403   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.421696   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:38.618798   23196 request.go:629] Waited for 196.383684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:38.618866   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:38.618871   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.618879   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.618882   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.621356   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:38.621824   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:38.621842   23196 pod_ready.go:81] duration metric: took 399.547546ms for pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:38.621852   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:38.818875   23196 request.go:629] Waited for 196.950973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251
	I0721 23:42:38.818937   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251
	I0721 23:42:38.818945   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.818954   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.818959   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.822032   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:39.018894   23196 request.go:629] Waited for 196.348282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:39.018978   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:39.018988   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.018993   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.018996   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.022059   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:39.022723   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:39.022743   23196 pod_ready.go:81] duration metric: took 400.884512ms for pod "kube-controller-manager-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.022755   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.218690   23196 request.go:629] Waited for 195.869375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m02
	I0721 23:42:39.218762   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m02
	I0721 23:42:39.218768   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.218783   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.218791   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.221688   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:39.418697   23196 request.go:629] Waited for 196.395764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:39.418770   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:39.418777   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.418789   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.418799   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.422125   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:39.422933   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:39.422954   23196 pod_ready.go:81] duration metric: took 400.191219ms for pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.422965   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8c6vn" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.619086   23196 request.go:629] Waited for 196.046312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8c6vn
	I0721 23:42:39.619141   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8c6vn
	I0721 23:42:39.619147   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.619161   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.619166   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.622167   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:39.819218   23196 request.go:629] Waited for 196.352929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:39.819278   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:39.819283   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.819290   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.819294   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.822488   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:39.822925   23196 pod_ready.go:92] pod "kube-proxy-8c6vn" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:39.822941   23196 pod_ready.go:81] duration metric: took 399.970562ms for pod "kube-proxy-8c6vn" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.822953   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-srpl8" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.019101   23196 request.go:629] Waited for 196.083444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srpl8
	I0721 23:42:40.019154   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srpl8
	I0721 23:42:40.019162   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.019169   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.019175   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.022507   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:40.218320   23196 request.go:629] Waited for 195.279025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:40.218399   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:40.218405   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.218412   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.218416   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.221318   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:40.221883   23196 pod_ready.go:92] pod "kube-proxy-srpl8" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:40.221903   23196 pod_ready.go:81] duration metric: took 398.939079ms for pod "kube-proxy-srpl8" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.221912   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.418974   23196 request.go:629] Waited for 196.993765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251
	I0721 23:42:40.419033   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251
	I0721 23:42:40.419037   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.419045   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.419048   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.422045   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:40.618865   23196 request.go:629] Waited for 196.30454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:40.618925   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:40.618930   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.618938   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.618942   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.621851   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:40.622454   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:40.622473   23196 pod_ready.go:81] duration metric: took 400.554697ms for pod "kube-scheduler-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.622486   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.818777   23196 request.go:629] Waited for 196.209908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m02
	I0721 23:42:40.818841   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m02
	I0721 23:42:40.818846   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.818852   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.818858   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.821719   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:41.018703   23196 request.go:629] Waited for 196.316562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:41.018752   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:41.018757   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.018765   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.018769   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.021756   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:41.022313   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:41.022331   23196 pod_ready.go:81] duration metric: took 399.837433ms for pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:41.022341   23196 pod_ready.go:38] duration metric: took 3.200465942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
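Each pod_ready step above is the same two-request pattern: GET the pod, GET its node, and declare success once the pod's PodReady condition reports "True". A minimal client-go sketch of that check; the 500ms poll interval is an assumption, not minikube's exact cadence:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // matches the pod_ready.go:92 `has status "Ready":"True"` lines above
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }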
	I0721 23:42:41.022357   23196 api_server.go:52] waiting for apiserver process to appear ...
	I0721 23:42:41.022414   23196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:42:41.039081   23196 api_server.go:72] duration metric: took 21.055262783s to wait for apiserver process to appear ...
	I0721 23:42:41.039099   23196 api_server.go:88] waiting for apiserver healthz status ...
	I0721 23:42:41.039115   23196 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0721 23:42:41.043473   23196 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0721 23:42:41.043527   23196 round_trippers.go:463] GET https://192.168.39.91:8443/version
	I0721 23:42:41.043532   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.043540   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.043545   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.044552   23196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0721 23:42:41.044631   23196 api_server.go:141] control plane version: v1.30.3
	I0721 23:42:41.044646   23196 api_server.go:131] duration metric: took 5.540863ms to wait for apiserver health ...
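The healthz probe at api_server.go:253 is a plain GET against /healthz that expects the literal body "ok", followed by a /version request to read the control-plane version. A sketch of the same probe through the clientset's REST client (minikube issues the request with its own HTTP client, so this is an equivalent, not its code):

    func apiserverHealthy(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return false, err
        }
        return string(body) == "ok", nil // the run above logs `returned 200: ok`
    }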
	I0721 23:42:41.044652   23196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0721 23:42:41.219082   23196 request.go:629] Waited for 174.361325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:41.219145   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:41.219153   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.219162   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.219171   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.224530   23196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0721 23:42:41.228521   23196 system_pods.go:59] 17 kube-system pods found
	I0721 23:42:41.228549   23196 system_pods.go:61] "coredns-7db6d8ff4d-bsbzk" [7d58d6f8-de63-49bf-9017-3cac954350d0] Running
	I0721 23:42:41.228555   23196 system_pods.go:61] "coredns-7db6d8ff4d-f4lqn" [ebae638d-339c-4241-a5b3-ab4c766efc2f] Running
	I0721 23:42:41.228558   23196 system_pods.go:61] "etcd-ha-564251" [ba57dacd-8bb8-4fc5-8c55-ab660c773c4a] Running
	I0721 23:42:41.228561   23196 system_pods.go:61] "etcd-ha-564251-m02" [4c0aa7df-9cac-4a18-a30f-78412cfce28d] Running
	I0721 23:42:41.228564   23196 system_pods.go:61] "kindnet-99b2q" [84ff92b4-7ad2-44e7-a6e6-89dcbb9413e2] Running
	I0721 23:42:41.228567   23196 system_pods.go:61] "kindnet-jz5md" [f109e939-9f9b-4fa8-b844-4c2652615933] Running
	I0721 23:42:41.228572   23196 system_pods.go:61] "kube-apiserver-ha-564251" [284aac5b-c6af-4a2f-bece-dfb3ca4fde87] Running
	I0721 23:42:41.228575   23196 system_pods.go:61] "kube-apiserver-ha-564251-m02" [291efb5d-a0a6-4edd-8258-4a2b85f91e6f] Running
	I0721 23:42:41.228578   23196 system_pods.go:61] "kube-controller-manager-ha-564251" [44710bc5-1824-4df6-b321-ac7db26d18a5] Running
	I0721 23:42:41.228581   23196 system_pods.go:61] "kube-controller-manager-ha-564251-m02" [ec0dd23d-58ee-49ca-b8e4-29ad2032a915] Running
	I0721 23:42:41.228584   23196 system_pods.go:61] "kube-proxy-8c6vn" [5b85365a-8a91-4e17-be4f-efc76e876e35] Running
	I0721 23:42:41.228586   23196 system_pods.go:61] "kube-proxy-srpl8" [faae2035-d506-4dd6-98b6-c3c5f5b53e84] Running
	I0721 23:42:41.228589   23196 system_pods.go:61] "kube-scheduler-ha-564251" [c7cd3ce3-94c8-4369-ba32-b832940c6aec] Running
	I0721 23:42:41.228592   23196 system_pods.go:61] "kube-scheduler-ha-564251-m02" [23912687-c898-47f3-91a9-c8784fb5d557] Running
	I0721 23:42:41.228596   23196 system_pods.go:61] "kube-vip-ha-564251" [e865cc87-be77-43f3-bef2-4c47dbe7ffe5] Running
	I0721 23:42:41.228599   23196 system_pods.go:61] "kube-vip-ha-564251-m02" [84f924b2-df09-413e-8a12-658116f072d3] Running
	I0721 23:42:41.228602   23196 system_pods.go:61] "storage-provisioner" [75c1992e-23ca-41e0-b046-1b70a6f6f63a] Running
	I0721 23:42:41.228607   23196 system_pods.go:74] duration metric: took 183.949996ms to wait for pod list to return data ...
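system_pods.go first waits for the kube-system pod list to come back non-empty (17 pods here) and, a few steps later, re-reads it to confirm every pod is Running. A minimal sketch of that list-and-check:

    func kubeSystemRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }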
	I0721 23:42:41.228615   23196 default_sa.go:34] waiting for default service account to be created ...
	I0721 23:42:41.418917   23196 request.go:629] Waited for 190.227355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/default/serviceaccounts
	I0721 23:42:41.418994   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/default/serviceaccounts
	I0721 23:42:41.419003   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.419015   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.419026   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.422128   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:41.422375   23196 default_sa.go:45] found service account: "default"
	I0721 23:42:41.422390   23196 default_sa.go:55] duration metric: took 193.76933ms for default service account to be created ...
	I0721 23:42:41.422397   23196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0721 23:42:41.618838   23196 request.go:629] Waited for 196.378448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:41.618890   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:41.618901   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.618914   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.618918   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.625681   23196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0721 23:42:41.630548   23196 system_pods.go:86] 17 kube-system pods found
	I0721 23:42:41.630570   23196 system_pods.go:89] "coredns-7db6d8ff4d-bsbzk" [7d58d6f8-de63-49bf-9017-3cac954350d0] Running
	I0721 23:42:41.630575   23196 system_pods.go:89] "coredns-7db6d8ff4d-f4lqn" [ebae638d-339c-4241-a5b3-ab4c766efc2f] Running
	I0721 23:42:41.630580   23196 system_pods.go:89] "etcd-ha-564251" [ba57dacd-8bb8-4fc5-8c55-ab660c773c4a] Running
	I0721 23:42:41.630583   23196 system_pods.go:89] "etcd-ha-564251-m02" [4c0aa7df-9cac-4a18-a30f-78412cfce28d] Running
	I0721 23:42:41.630588   23196 system_pods.go:89] "kindnet-99b2q" [84ff92b4-7ad2-44e7-a6e6-89dcbb9413e2] Running
	I0721 23:42:41.630591   23196 system_pods.go:89] "kindnet-jz5md" [f109e939-9f9b-4fa8-b844-4c2652615933] Running
	I0721 23:42:41.630596   23196 system_pods.go:89] "kube-apiserver-ha-564251" [284aac5b-c6af-4a2f-bece-dfb3ca4fde87] Running
	I0721 23:42:41.630618   23196 system_pods.go:89] "kube-apiserver-ha-564251-m02" [291efb5d-a0a6-4edd-8258-4a2b85f91e6f] Running
	I0721 23:42:41.630625   23196 system_pods.go:89] "kube-controller-manager-ha-564251" [44710bc5-1824-4df6-b321-ac7db26d18a5] Running
	I0721 23:42:41.630637   23196 system_pods.go:89] "kube-controller-manager-ha-564251-m02" [ec0dd23d-58ee-49ca-b8e4-29ad2032a915] Running
	I0721 23:42:41.630644   23196 system_pods.go:89] "kube-proxy-8c6vn" [5b85365a-8a91-4e17-be4f-efc76e876e35] Running
	I0721 23:42:41.630651   23196 system_pods.go:89] "kube-proxy-srpl8" [faae2035-d506-4dd6-98b6-c3c5f5b53e84] Running
	I0721 23:42:41.630655   23196 system_pods.go:89] "kube-scheduler-ha-564251" [c7cd3ce3-94c8-4369-ba32-b832940c6aec] Running
	I0721 23:42:41.630660   23196 system_pods.go:89] "kube-scheduler-ha-564251-m02" [23912687-c898-47f3-91a9-c8784fb5d557] Running
	I0721 23:42:41.630664   23196 system_pods.go:89] "kube-vip-ha-564251" [e865cc87-be77-43f3-bef2-4c47dbe7ffe5] Running
	I0721 23:42:41.630668   23196 system_pods.go:89] "kube-vip-ha-564251-m02" [84f924b2-df09-413e-8a12-658116f072d3] Running
	I0721 23:42:41.630671   23196 system_pods.go:89] "storage-provisioner" [75c1992e-23ca-41e0-b046-1b70a6f6f63a] Running
	I0721 23:42:41.630678   23196 system_pods.go:126] duration metric: took 208.276125ms to wait for k8s-apps to be running ...
	I0721 23:42:41.630688   23196 system_svc.go:44] waiting for kubelet service to be running ....
	I0721 23:42:41.630736   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:42:41.645823   23196 system_svc.go:56] duration metric: took 15.126226ms WaitForService to wait for kubelet
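The kubelet check runs `systemctl is-active --quiet service kubelet` over SSH; with --quiet the command prints nothing and signals the unit's state purely through its exit code (0 means active). A local Go equivalent of that probe:

    import "os/exec"

    func kubeletActive() bool {
        // exit status 0 == unit active; any non-zero status means inactive/failed/not-found
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }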
	I0721 23:42:41.645852   23196 kubeadm.go:582] duration metric: took 21.662036695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:42:41.645871   23196 node_conditions.go:102] verifying NodePressure condition ...
	I0721 23:42:41.818236   23196 request.go:629] Waited for 172.279493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes
	I0721 23:42:41.818308   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes
	I0721 23:42:41.818316   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.818330   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.818340   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.821351   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:41.822193   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:42:41.822215   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:42:41.822226   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:42:41.822229   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:42:41.822236   23196 node_conditions.go:105] duration metric: took 176.359038ms to run NodePressure ...
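The NodePressure step lists every node and reads its reported capacity; the two "ephemeral capacity / cpu capacity" pairs above are one pair per node (ha-564251 and ha-564251-m02). A sketch of reading those fields with client-go:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage] // 17734596Ki above
            cpu := n.Status.Capacity[corev1.ResourceCPU]                  // 2 above
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }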
	I0721 23:42:41.822249   23196 start.go:241] waiting for startup goroutines ...
	I0721 23:42:41.822275   23196 start.go:255] writing updated cluster config ...
	I0721 23:42:41.823794   23196 out.go:177] 
	I0721 23:42:41.825017   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:42:41.825098   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:42:41.826638   23196 out.go:177] * Starting "ha-564251-m03" control-plane node in "ha-564251" cluster
	I0721 23:42:41.827867   23196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:42:41.827890   23196 cache.go:56] Caching tarball of preloaded images
	I0721 23:42:41.827972   23196 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:42:41.827982   23196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:42:41.828071   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:42:41.828233   23196 start.go:360] acquireMachinesLock for ha-564251-m03: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:42:41.828272   23196 start.go:364] duration metric: took 22.9µs to acquireMachinesLock for "ha-564251-m03"
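acquireMachinesLock serializes machine creation per name so concurrent node bring-ups cannot race on the same VM; the Spec printed above (500ms retry delay, 13m timeout) belongs to minikube's cross-process lock. A simplified process-local stand-in, purely illustrative:

    import "sync"

    var machineLocks sync.Map // machine name -> *sync.Mutex

    func acquireMachineLock(name string) (release func()) {
        m, _ := machineLocks.LoadOrStore(name, &sync.Mutex{})
        mu := m.(*sync.Mutex)
        mu.Lock()
        return mu.Unlock // caller defers release once provisioning finishes
    }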
	I0721 23:42:41.828292   23196 start.go:93] Provisioning new machine with config: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:42:41.828373   23196 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0721 23:42:41.829697   23196 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 23:42:41.829778   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:42:41.829810   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:42:41.848164   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0721 23:42:41.848575   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:42:41.849081   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:42:41.849103   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:42:41.849379   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:42:41.849650   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetMachineName
	I0721 23:42:41.849794   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:42:41.849971   23196 start.go:159] libmachine.API.Create for "ha-564251" (driver="kvm2")
	I0721 23:42:41.850001   23196 client.go:168] LocalClient.Create starting
	I0721 23:42:41.850035   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0721 23:42:41.850072   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:42:41.850096   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:42:41.850160   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0721 23:42:41.850185   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:42:41.850200   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:42:41.850226   23196 main.go:141] libmachine: Running pre-create checks...
	I0721 23:42:41.850237   23196 main.go:141] libmachine: (ha-564251-m03) Calling .PreCreateCheck
	I0721 23:42:41.850384   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetConfigRaw
	I0721 23:42:41.850738   23196 main.go:141] libmachine: Creating machine...
	I0721 23:42:41.850753   23196 main.go:141] libmachine: (ha-564251-m03) Calling .Create
	I0721 23:42:41.850914   23196 main.go:141] libmachine: (ha-564251-m03) Creating KVM machine...
	I0721 23:42:41.852185   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found existing default KVM network
	I0721 23:42:41.852347   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found existing private KVM network mk-ha-564251
	I0721 23:42:41.852434   23196 main.go:141] libmachine: (ha-564251-m03) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03 ...
	I0721 23:42:41.852451   23196 main.go:141] libmachine: (ha-564251-m03) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:42:41.852505   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:41.852428   23971 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:42:41.852612   23196 main.go:141] libmachine: (ha-564251-m03) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:42:42.078170   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:42.078041   23971 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa...
	I0721 23:42:42.263096   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:42.262983   23971 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/ha-564251-m03.rawdisk...
	I0721 23:42:42.263131   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Writing magic tar header
	I0721 23:42:42.263145   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Writing SSH key tar header
	I0721 23:42:42.263156   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:42.263093   23971 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03 ...
	I0721 23:42:42.263235   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03
	I0721 23:42:42.263265   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0721 23:42:42.263277   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03 (perms=drwx------)
	I0721 23:42:42.263288   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0721 23:42:42.263296   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0721 23:42:42.263307   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0721 23:42:42.263319   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0721 23:42:42.263334   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0721 23:42:42.263346   23196 main.go:141] libmachine: (ha-564251-m03) Creating domain...
	I0721 23:42:42.263363   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:42:42.263377   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0721 23:42:42.263385   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0721 23:42:42.263394   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins
	I0721 23:42:42.263407   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home
	I0721 23:42:42.263423   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Skipping /home - not owner
	I0721 23:42:42.264408   23196 main.go:141] libmachine: (ha-564251-m03) define libvirt domain using xml: 
	I0721 23:42:42.264426   23196 main.go:141] libmachine: (ha-564251-m03) <domain type='kvm'>
	I0721 23:42:42.264434   23196 main.go:141] libmachine: (ha-564251-m03)   <name>ha-564251-m03</name>
	I0721 23:42:42.264442   23196 main.go:141] libmachine: (ha-564251-m03)   <memory unit='MiB'>2200</memory>
	I0721 23:42:42.264448   23196 main.go:141] libmachine: (ha-564251-m03)   <vcpu>2</vcpu>
	I0721 23:42:42.264453   23196 main.go:141] libmachine: (ha-564251-m03)   <features>
	I0721 23:42:42.264466   23196 main.go:141] libmachine: (ha-564251-m03)     <acpi/>
	I0721 23:42:42.264477   23196 main.go:141] libmachine: (ha-564251-m03)     <apic/>
	I0721 23:42:42.264486   23196 main.go:141] libmachine: (ha-564251-m03)     <pae/>
	I0721 23:42:42.264494   23196 main.go:141] libmachine: (ha-564251-m03)     
	I0721 23:42:42.264502   23196 main.go:141] libmachine: (ha-564251-m03)   </features>
	I0721 23:42:42.264508   23196 main.go:141] libmachine: (ha-564251-m03)   <cpu mode='host-passthrough'>
	I0721 23:42:42.264530   23196 main.go:141] libmachine: (ha-564251-m03)   
	I0721 23:42:42.264550   23196 main.go:141] libmachine: (ha-564251-m03)   </cpu>
	I0721 23:42:42.264563   23196 main.go:141] libmachine: (ha-564251-m03)   <os>
	I0721 23:42:42.264574   23196 main.go:141] libmachine: (ha-564251-m03)     <type>hvm</type>
	I0721 23:42:42.264585   23196 main.go:141] libmachine: (ha-564251-m03)     <boot dev='cdrom'/>
	I0721 23:42:42.264596   23196 main.go:141] libmachine: (ha-564251-m03)     <boot dev='hd'/>
	I0721 23:42:42.264609   23196 main.go:141] libmachine: (ha-564251-m03)     <bootmenu enable='no'/>
	I0721 23:42:42.264622   23196 main.go:141] libmachine: (ha-564251-m03)   </os>
	I0721 23:42:42.264630   23196 main.go:141] libmachine: (ha-564251-m03)   <devices>
	I0721 23:42:42.264637   23196 main.go:141] libmachine: (ha-564251-m03)     <disk type='file' device='cdrom'>
	I0721 23:42:42.264669   23196 main.go:141] libmachine: (ha-564251-m03)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/boot2docker.iso'/>
	I0721 23:42:42.264693   23196 main.go:141] libmachine: (ha-564251-m03)       <target dev='hdc' bus='scsi'/>
	I0721 23:42:42.264705   23196 main.go:141] libmachine: (ha-564251-m03)       <readonly/>
	I0721 23:42:42.264716   23196 main.go:141] libmachine: (ha-564251-m03)     </disk>
	I0721 23:42:42.264729   23196 main.go:141] libmachine: (ha-564251-m03)     <disk type='file' device='disk'>
	I0721 23:42:42.264741   23196 main.go:141] libmachine: (ha-564251-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0721 23:42:42.264758   23196 main.go:141] libmachine: (ha-564251-m03)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/ha-564251-m03.rawdisk'/>
	I0721 23:42:42.264769   23196 main.go:141] libmachine: (ha-564251-m03)       <target dev='hda' bus='virtio'/>
	I0721 23:42:42.264780   23196 main.go:141] libmachine: (ha-564251-m03)     </disk>
	I0721 23:42:42.264793   23196 main.go:141] libmachine: (ha-564251-m03)     <interface type='network'>
	I0721 23:42:42.264805   23196 main.go:141] libmachine: (ha-564251-m03)       <source network='mk-ha-564251'/>
	I0721 23:42:42.264818   23196 main.go:141] libmachine: (ha-564251-m03)       <model type='virtio'/>
	I0721 23:42:42.264827   23196 main.go:141] libmachine: (ha-564251-m03)     </interface>
	I0721 23:42:42.264837   23196 main.go:141] libmachine: (ha-564251-m03)     <interface type='network'>
	I0721 23:42:42.264853   23196 main.go:141] libmachine: (ha-564251-m03)       <source network='default'/>
	I0721 23:42:42.264869   23196 main.go:141] libmachine: (ha-564251-m03)       <model type='virtio'/>
	I0721 23:42:42.264880   23196 main.go:141] libmachine: (ha-564251-m03)     </interface>
	I0721 23:42:42.264890   23196 main.go:141] libmachine: (ha-564251-m03)     <serial type='pty'>
	I0721 23:42:42.264901   23196 main.go:141] libmachine: (ha-564251-m03)       <target port='0'/>
	I0721 23:42:42.264911   23196 main.go:141] libmachine: (ha-564251-m03)     </serial>
	I0721 23:42:42.264920   23196 main.go:141] libmachine: (ha-564251-m03)     <console type='pty'>
	I0721 23:42:42.264932   23196 main.go:141] libmachine: (ha-564251-m03)       <target type='serial' port='0'/>
	I0721 23:42:42.264946   23196 main.go:141] libmachine: (ha-564251-m03)     </console>
	I0721 23:42:42.264960   23196 main.go:141] libmachine: (ha-564251-m03)     <rng model='virtio'>
	I0721 23:42:42.264977   23196 main.go:141] libmachine: (ha-564251-m03)       <backend model='random'>/dev/random</backend>
	I0721 23:42:42.264992   23196 main.go:141] libmachine: (ha-564251-m03)     </rng>
	I0721 23:42:42.265007   23196 main.go:141] libmachine: (ha-564251-m03)     
	I0721 23:42:42.265017   23196 main.go:141] libmachine: (ha-564251-m03)     
	I0721 23:42:42.265025   23196 main.go:141] libmachine: (ha-564251-m03)   </devices>
	I0721 23:42:42.265036   23196 main.go:141] libmachine: (ha-564251-m03) </domain>
	I0721 23:42:42.265045   23196 main.go:141] libmachine: (ha-564251-m03) 
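The XML above is the complete libvirt domain definition for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM boot device, the raw disk image, two virtio NICs (one on mk-ha-564251, one on the default NAT network), a serial console, and a virtio RNG. libmachine submits it through the libvirt API; doing the same by hand with the virsh CLI would look roughly like this (the XML file path is hypothetical):

    import (
        "fmt"
        "os/exec"
    )

    func defineAndStartDomain() error {
        // /tmp/ha-564251-m03.xml is assumed to hold the domain XML printed above
        if out, err := exec.Command("virsh", "define", "/tmp/ha-564251-m03.xml").CombinedOutput(); err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        return exec.Command("virsh", "start", "ha-564251-m03").Run()
    }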
	I0721 23:42:42.271675   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:a8:f0:9d in network default
	I0721 23:42:42.272233   23196 main.go:141] libmachine: (ha-564251-m03) Ensuring networks are active...
	I0721 23:42:42.272255   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:42.272843   23196 main.go:141] libmachine: (ha-564251-m03) Ensuring network default is active
	I0721 23:42:42.273161   23196 main.go:141] libmachine: (ha-564251-m03) Ensuring network mk-ha-564251 is active
	I0721 23:42:42.273605   23196 main.go:141] libmachine: (ha-564251-m03) Getting domain xml...
	I0721 23:42:42.274281   23196 main.go:141] libmachine: (ha-564251-m03) Creating domain...
	I0721 23:42:43.487914   23196 main.go:141] libmachine: (ha-564251-m03) Waiting to get IP...
	I0721 23:42:43.488790   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:43.489358   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:43.489388   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:43.489330   23971 retry.go:31] will retry after 223.451018ms: waiting for machine to come up
	I0721 23:42:43.714689   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:43.715254   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:43.715278   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:43.715174   23971 retry.go:31] will retry after 313.245752ms: waiting for machine to come up
	I0721 23:42:44.029580   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:44.030002   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:44.030032   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:44.029965   23971 retry.go:31] will retry after 307.421104ms: waiting for machine to come up
	I0721 23:42:44.339408   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:44.339832   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:44.339858   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:44.339790   23971 retry.go:31] will retry after 576.381475ms: waiting for machine to come up
	I0721 23:42:44.917449   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:44.917865   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:44.917893   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:44.917814   23971 retry.go:31] will retry after 739.541484ms: waiting for machine to come up
	I0721 23:42:45.658321   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:45.658656   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:45.658686   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:45.658632   23971 retry.go:31] will retry after 914.474856ms: waiting for machine to come up
	I0721 23:42:46.575185   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:46.575583   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:46.575604   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:46.575528   23971 retry.go:31] will retry after 1.017323514s: waiting for machine to come up
	I0721 23:42:47.594012   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:47.594565   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:47.594597   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:47.594530   23971 retry.go:31] will retry after 1.289736101s: waiting for machine to come up
	I0721 23:42:48.885806   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:48.886172   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:48.886200   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:48.886116   23971 retry.go:31] will retry after 1.778438113s: waiting for machine to come up
	I0721 23:42:50.666535   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:50.666966   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:50.666985   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:50.666930   23971 retry.go:31] will retry after 2.194283655s: waiting for machine to come up
	I0721 23:42:52.862586   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:52.863048   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:52.863093   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:52.863023   23971 retry.go:31] will retry after 2.561837275s: waiting for machine to come up
	I0721 23:42:55.427865   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:55.428311   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:55.428337   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:55.428264   23971 retry.go:31] will retry after 3.567006608s: waiting for machine to come up
	I0721 23:42:58.997015   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:58.997369   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:58.997390   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:58.997349   23971 retry.go:31] will retry after 2.970832116s: waiting for machine to come up
	I0721 23:43:01.970081   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:01.970646   23196 main.go:141] libmachine: (ha-564251-m03) Found IP for machine: 192.168.39.89
	I0721 23:43:01.970673   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has current primary IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
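The "will retry after ..." lines poll the network's DHCP leases for the new MAC address, with a delay that roughly doubles under random jitter (hence 3.57s followed by 2.97s). A minimal sketch of that wait loop; lookupIP stands in for the lease query and the jitter formula is an assumption:

    import (
        "context"
        "math/rand"
        "time"
    )

    func waitForIP(ctx context.Context, lookupIP func() (string, bool)) (string, error) {
        delay := 200 * time.Millisecond
        for {
            if ip, ok := lookupIP(); ok {
                return ip, nil // "Found IP for machine: 192.168.39.89" above
            }
            jittered := delay/2 + time.Duration(rand.Int63n(int64(delay))) // assumed jitter
            select {
            case <-ctx.Done():
                return "", ctx.Err()
            case <-time.After(jittered):
            }
            delay *= 2
        }
    }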
	I0721 23:43:01.970682   23196 main.go:141] libmachine: (ha-564251-m03) Reserving static IP address...
	I0721 23:43:01.971028   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find host DHCP lease matching {name: "ha-564251-m03", mac: "52:54:00:9c:e6:b3", ip: "192.168.39.89"} in network mk-ha-564251
	I0721 23:43:02.042727   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Getting to WaitForSSH function...
	I0721 23:43:02.042759   23196 main.go:141] libmachine: (ha-564251-m03) Reserved static IP address: 192.168.39.89
	I0721 23:43:02.042772   23196 main.go:141] libmachine: (ha-564251-m03) Waiting for SSH to be available...
	I0721 23:43:02.045758   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.046196   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.046225   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.046410   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Using SSH client type: external
	I0721 23:43:02.046431   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa (-rw-------)
	I0721 23:43:02.046465   23196 main.go:141] libmachine: (ha-564251-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0721 23:43:02.046484   23196 main.go:141] libmachine: (ha-564251-m03) DBG | About to run SSH command:
	I0721 23:43:02.046498   23196 main.go:141] libmachine: (ha-564251-m03) DBG | exit 0
	I0721 23:43:02.170333   23196 main.go:141] libmachine: (ha-564251-m03) DBG | SSH cmd err, output: <nil>: 
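WaitForSSH shells out to the system ssh with host-key checking disabled and runs `exit 0`; the empty error above ("SSH cmd err, output: <nil>") means sshd in the guest accepted the key and executed the command. A Go sketch of the same probe using the flags from the log:

    import "os/exec"

    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil // exit status 0 == sshd reachable, key accepted
    }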
	I0721 23:43:02.170581   23196 main.go:141] libmachine: (ha-564251-m03) KVM machine creation complete!
	I0721 23:43:02.170916   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetConfigRaw
	I0721 23:43:02.171391   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:02.171562   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:02.171782   23196 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0721 23:43:02.171799   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:43:02.173068   23196 main.go:141] libmachine: Detecting operating system of created instance...
	I0721 23:43:02.173085   23196 main.go:141] libmachine: Waiting for SSH to be available...
	I0721 23:43:02.173090   23196 main.go:141] libmachine: Getting to WaitForSSH function...
	I0721 23:43:02.173096   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.175538   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.175906   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.175939   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.176080   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.176251   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.176421   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.176546   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.176721   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.176899   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.176910   23196 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0721 23:43:02.281807   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:43:02.281831   23196 main.go:141] libmachine: Detecting the provisioner...
	I0721 23:43:02.281842   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.284709   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.285089   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.285112   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.285352   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.285540   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.285676   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.285794   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.285952   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.286121   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.286135   23196 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0721 23:43:02.390968   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0721 23:43:02.391036   23196 main.go:141] libmachine: found compatible host: buildroot
	I0721 23:43:02.391045   23196 main.go:141] libmachine: Provisioning with buildroot...
	I0721 23:43:02.391052   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetMachineName
	I0721 23:43:02.391296   23196 buildroot.go:166] provisioning hostname "ha-564251-m03"
	I0721 23:43:02.391322   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetMachineName
	I0721 23:43:02.391526   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.394031   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.394382   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.394408   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.394499   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.394691   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.394842   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.394977   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.395125   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.395334   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.395352   23196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-564251-m03 && echo "ha-564251-m03" | sudo tee /etc/hostname
	I0721 23:43:02.513525   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251-m03
	
	I0721 23:43:02.513588   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.516196   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.516566   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.516590   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.516722   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.516910   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.517089   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.517216   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.517357   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.517582   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.517602   23196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-564251-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-564251-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-564251-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:43:02.631105   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
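
The provisioner script above is an idempotent /etc/hosts rewrite: if no entry ends in the new hostname, it either rewrites an existing 127.0.1.1 line in place or appends one. Below is a minimal Go sketch of assembling that command; buildHostsCmd is a hypothetical helper for illustration, not minikube's own function.

    package main

    import "fmt"

    // buildHostsCmd reproduces the /etc/hosts rewrite the provisioner runs
    // over SSH: replace an existing 127.0.1.1 entry with the new hostname,
    // or append one if none exists.
    func buildHostsCmd(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(buildHostsCmd("ha-564251-m03"))
    }
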
	I0721 23:43:02.631138   23196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:43:02.631165   23196 buildroot.go:174] setting up certificates
	I0721 23:43:02.631179   23196 provision.go:84] configureAuth start
	I0721 23:43:02.631188   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetMachineName
	I0721 23:43:02.631446   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:43:02.634128   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.634576   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.634624   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.634793   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.637233   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.637593   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.637619   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.637751   23196 provision.go:143] copyHostCerts
	I0721 23:43:02.637781   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:43:02.637810   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0721 23:43:02.637822   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:43:02.637892   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:43:02.637978   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:43:02.638014   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0721 23:43:02.638030   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:43:02.638069   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:43:02.638130   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:43:02.638150   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0721 23:43:02.638157   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:43:02.638195   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:43:02.638258   23196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.ha-564251-m03 san=[127.0.0.1 192.168.39.89 ha-564251-m03 localhost minikube]
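
The server cert generated above carries every name a client might use to reach this machine (loopback, the node IP, the hostname, localhost, minikube) as subject alternative names, so a single server.pem satisfies hostname verification for all of them. The sketch below shows the standard crypto/x509 pattern for such a certificate; it self-signs for brevity where the provisioner signs with ca.pem/ca-key.pem, and it is illustrative rather than minikube's own implementation.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-564251-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
    		DNSNames:    []string{"ha-564251-m03", "localhost", "minikube"},
    	}
    	// Self-signed here for brevity; the provisioner signs with the CA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
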
	I0721 23:43:02.735309   23196 provision.go:177] copyRemoteCerts
	I0721 23:43:02.735359   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:43:02.735384   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.737765   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.738103   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.738134   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.738285   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.738451   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.738633   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.738767   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:43:02.821678   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0721 23:43:02.821745   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:43:02.843500   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0721 23:43:02.843563   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0721 23:43:02.864390   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0721 23:43:02.864455   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 23:43:02.886139   23196 provision.go:87] duration metric: took 254.946457ms to configureAuth
	I0721 23:43:02.886166   23196 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:43:02.886396   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:43:02.886460   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.889045   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.889432   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.889463   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.889618   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.889796   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.889949   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.890109   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.890242   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.890410   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.890425   23196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:43:03.138130   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:43:03.138156   23196 main.go:141] libmachine: Checking connection to Docker...
	I0721 23:43:03.138164   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetURL
	I0721 23:43:03.139494   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Using libvirt version 6000000
	I0721 23:43:03.141768   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.142131   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.142157   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.142303   23196 main.go:141] libmachine: Docker is up and running!
	I0721 23:43:03.142319   23196 main.go:141] libmachine: Reticulating splines...
	I0721 23:43:03.142326   23196 client.go:171] duration metric: took 21.292314837s to LocalClient.Create
	I0721 23:43:03.142348   23196 start.go:167] duration metric: took 21.292379398s to libmachine.API.Create "ha-564251"
	I0721 23:43:03.142357   23196 start.go:293] postStartSetup for "ha-564251-m03" (driver="kvm2")
	I0721 23:43:03.142366   23196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:43:03.142387   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.142644   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:43:03.142673   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:03.144607   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.144929   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.144958   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.145078   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:03.145218   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.145369   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:03.145480   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:43:03.228172   23196 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:43:03.231951   23196 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:43:03.231987   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:43:03.232040   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:43:03.232104   23196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0721 23:43:03.232112   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0721 23:43:03.232188   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 23:43:03.241309   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:43:03.263190   23196 start.go:296] duration metric: took 120.821526ms for postStartSetup
	I0721 23:43:03.263233   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetConfigRaw
	I0721 23:43:03.263827   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:43:03.266290   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.266781   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.266811   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.267040   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:43:03.267243   23196 start.go:128] duration metric: took 21.438859784s to createHost
	I0721 23:43:03.267270   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:03.269462   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.269819   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.269834   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.270019   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:03.270207   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.270363   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.270525   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:03.270722   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:03.270917   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:03.270931   23196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:43:03.375117   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605383.350180133
	
	I0721 23:43:03.375162   23196 fix.go:216] guest clock: 1721605383.350180133
	I0721 23:43:03.375172   23196 fix.go:229] Guest: 2024-07-21 23:43:03.350180133 +0000 UTC Remote: 2024-07-21 23:43:03.267255284 +0000 UTC m=+142.753883431 (delta=82.924849ms)
	I0721 23:43:03.375192   23196 fix.go:200] guest clock delta is within tolerance: 82.924849ms
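
The fix.go lines above compare the guest clock read over SSH (date +%s.%N) against the host's notion of the same instant and skip a resync when the skew is small. The sketch below redoes that comparison with the exact timestamps from the log; the one-second tolerance is an assumption for illustration, minikube's real threshold lives in its fix logic.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest clock as parsed from `date +%s.%N`, host clock from the log.
    	guest := time.Unix(1721605383, 350180133)
    	remote := time.Date(2024, 7, 21, 23, 43, 3, 267255284, time.UTC)
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 1 * time.Second // assumed tolerance for this sketch
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
    	}
    }

Run against these inputs it reports the same 82.924849ms delta the log shows.
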
	I0721 23:43:03.375200   23196 start.go:83] releasing machines lock for "ha-564251-m03", held for 21.546916603s
	I0721 23:43:03.375231   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.375490   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:43:03.377846   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.378222   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.378250   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.380443   23196 out.go:177] * Found network options:
	I0721 23:43:03.381872   23196 out.go:177]   - NO_PROXY=192.168.39.91,192.168.39.202
	W0721 23:43:03.383034   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	W0721 23:43:03.383054   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0721 23:43:03.383066   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.383661   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.383857   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.383949   23196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:43:03.383985   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	W0721 23:43:03.384056   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	W0721 23:43:03.384080   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0721 23:43:03.384143   23196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:43:03.384165   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:03.386580   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.386810   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.386982   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.387005   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.387216   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:03.387400   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.387432   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.387479   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.387602   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:03.387744   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.387754   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:03.387885   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:43:03.387917   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:03.388032   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:43:03.617764   23196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:43:03.623563   23196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:43:03.623630   23196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:43:03.637910   23196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
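
The find/mv pipeline above parks any stray bridge or podman CNI configs by renaming them to *.mk_disabled rather than deleting them, so the cluster's own CNI config wins without losing the originals. A sketch of the same move done in-process; the paths mirror the log and error handling is trimmed.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, _ := filepath.Glob(pat)
    		for _, m := range matches {
    			// Skip files already parked on a previous run.
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, err)
    			}
    		}
    	}
    }
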
	I0721 23:43:03.637932   23196 start.go:495] detecting cgroup driver to use...
	I0721 23:43:03.637984   23196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:43:03.653039   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:43:03.664909   23196 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:43:03.664961   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:43:03.677456   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:43:03.689956   23196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:43:03.803962   23196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:43:03.936639   23196 docker.go:233] disabling docker service ...
	I0721 23:43:03.936714   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:43:03.951884   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:43:03.963888   23196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:43:04.094568   23196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:43:04.215209   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0721 23:43:04.229166   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:43:04.246213   23196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:43:04.246280   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.256127   23196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:43:04.256189   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.265950   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.276981   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.288430   23196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:43:04.299786   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.309646   23196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.325631   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
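
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch CRI-O to the cgroupfs manager with conmon in the pod cgroup, and open privileged ports by adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. Below is a sketch of the same edits applied in-process to an inlined stand-in for the config file; illustrative only.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits.
    	conf := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	fmt.Print(conf)
    }
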
	I0721 23:43:04.335342   23196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:43:04.343950   23196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0721 23:43:04.344002   23196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0721 23:43:04.355378   23196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
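
The fallback above is deliberate: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so a failed sysctl probe (status 255) is treated as a cue to modprobe the module, after which IPv4 forwarding is enabled. A sketch of that probe-then-load sequence; it must run as root on a Linux host.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Probe the bridge sysctl; a non-zero exit means br_netfilter is absent.
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			panic(err)
    		}
    	}
    	// Enable IPv4 forwarding, as the next log line does.
    	if err := exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		panic(err)
    	}
    }
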
	I0721 23:43:04.364357   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:43:04.491098   23196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0721 23:43:04.619871   23196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:43:04.619952   23196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:43:04.624297   23196 start.go:563] Will wait 60s for crictl version
	I0721 23:43:04.624357   23196 ssh_runner.go:195] Run: which crictl
	I0721 23:43:04.627832   23196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:43:04.665590   23196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0721 23:43:04.665664   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:43:04.692460   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:43:04.720162   23196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:43:04.721498   23196 out.go:177]   - env NO_PROXY=192.168.39.91
	I0721 23:43:04.722768   23196 out.go:177]   - env NO_PROXY=192.168.39.91,192.168.39.202
	I0721 23:43:04.723848   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:43:04.726673   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:04.727088   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:04.727118   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:04.727384   23196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:43:04.731216   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:43:04.742584   23196 mustload.go:65] Loading cluster: ha-564251
	I0721 23:43:04.742825   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:43:04.743220   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:43:04.743284   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:43:04.758771   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0721 23:43:04.759271   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:43:04.759737   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:43:04.759762   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:43:04.760048   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:43:04.760267   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:43:04.762317   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:43:04.762687   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:43:04.762729   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:43:04.778848   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I0721 23:43:04.779235   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:43:04.779685   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:43:04.779707   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:43:04.779993   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:43:04.780189   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:43:04.780318   23196 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251 for IP: 192.168.39.89
	I0721 23:43:04.780329   23196 certs.go:194] generating shared ca certs ...
	I0721 23:43:04.780347   23196 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:43:04.780458   23196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:43:04.780494   23196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:43:04.780503   23196 certs.go:256] generating profile certs ...
	I0721 23:43:04.780566   23196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key
	I0721 23:43:04.780588   23196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.a4a5f4d0
	I0721 23:43:04.780604   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.a4a5f4d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91 192.168.39.202 192.168.39.89 192.168.39.254]
	I0721 23:43:05.011110   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.a4a5f4d0 ...
	I0721 23:43:05.011146   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.a4a5f4d0: {Name:mk0d14ced944e14d8abaa56474e12ed7f0f73217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:43:05.011332   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.a4a5f4d0 ...
	I0721 23:43:05.011347   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.a4a5f4d0: {Name:mk7d7654d81c42e493ce8909de430daf29543ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:43:05.011440   23196 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.a4a5f4d0 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt
	I0721 23:43:05.011607   23196 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.a4a5f4d0 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key
	I0721 23:43:05.011791   23196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key
	I0721 23:43:05.011810   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0721 23:43:05.011822   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0721 23:43:05.011832   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0721 23:43:05.011842   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0721 23:43:05.011852   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0721 23:43:05.011864   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0721 23:43:05.011874   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0721 23:43:05.011885   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0721 23:43:05.011927   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0721 23:43:05.011955   23196 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0721 23:43:05.011964   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:43:05.011985   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:43:05.012005   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:43:05.012025   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:43:05.012058   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:43:05.012085   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0721 23:43:05.012099   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:43:05.012112   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0721 23:43:05.012143   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:43:05.014986   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:43:05.015468   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:43:05.015494   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:43:05.015661   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:43:05.015837   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:43:05.016017   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:43:05.016152   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:43:05.090966   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0721 23:43:05.095673   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0721 23:43:05.107966   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0721 23:43:05.111908   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0721 23:43:05.122311   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0721 23:43:05.125941   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0721 23:43:05.135113   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0721 23:43:05.138926   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0721 23:43:05.148119   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0721 23:43:05.151668   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0721 23:43:05.160580   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0721 23:43:05.163941   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0721 23:43:05.172840   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:43:05.198333   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:43:05.222050   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:43:05.243975   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:43:05.268300   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0721 23:43:05.290810   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:43:05.312478   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:43:05.334568   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:43:05.356078   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0721 23:43:05.377090   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:43:05.398048   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0721 23:43:05.418825   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0721 23:43:05.433738   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0721 23:43:05.450154   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0721 23:43:05.466683   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0721 23:43:05.481508   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0721 23:43:05.498633   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0721 23:43:05.513810   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0721 23:43:05.528967   23196 ssh_runner.go:195] Run: openssl version
	I0721 23:43:05.534351   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0721 23:43:05.544208   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0721 23:43:05.548119   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0721 23:43:05.548161   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0721 23:43:05.553477   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 23:43:05.564641   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:43:05.575638   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:43:05.579720   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:43:05.579770   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:43:05.584920   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 23:43:05.594278   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0721 23:43:05.603788   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0721 23:43:05.607648   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0721 23:43:05.607686   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0721 23:43:05.613043   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
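
The ln -fs steps above wire the copied PEMs into OpenSSL's hashed-lookup scheme: OpenSSL resolves CAs in /etc/ssl/certs through symlinks named <subject-hash>.0, where the hash comes from openssl x509 -hash (b5213941 for minikubeCA.pem, per the log). A sketch of creating such a link; linkByHash is a hypothetical helper, not minikube's code.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash symlinks a CA PEM under its OpenSSL subject hash so that
    // certificate verification can find it in the certs directory.
    func linkByHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // ignore error; recreate the link idempotently
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
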
	I0721 23:43:05.624031   23196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:43:05.627604   23196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:43:05.627656   23196 kubeadm.go:934] updating node {m03 192.168.39.89 8443 v1.30.3 crio true true} ...
	I0721 23:43:05.627739   23196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-564251-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 23:43:05.627766   23196 kube-vip.go:115] generating kube-vip config ...
	I0721 23:43:05.627802   23196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0721 23:43:05.643803   23196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0721 23:43:05.643866   23196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
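
The manifest above runs kube-vip as a static pod on every control plane: the nodes compete for the plndr-cp-lock lease (5s duration, 3s renew deadline, 1s retry), the leader answers ARP for the VIP 192.168.39.254, and lb_enable spreads apiserver traffic across members on port 8443. The sketch below templates such a manifest from its three essential inputs; it trims minikube's generator (kube-vip.go) to the basics and is not its actual code.

    package main

    import (
    	"os"
    	"text/template"
    )

    var manifest = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - {name: vip_interface, value: {{.Interface}}}
        - {name: address, value: "{{.VIP}}"}
        - {name: port, value: "{{.Port}}"}
        - {name: cp_enable, value: "true"}
        - {name: vip_leaderelection, value: "true"}
    `))

    func main() {
    	_ = manifest.Execute(os.Stdout, struct {
    		Interface, VIP, Port string
    	}{"eth0", "192.168.39.254", "8443"})
    }
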
	I0721 23:43:05.643927   23196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:43:05.652073   23196 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0721 23:43:05.652127   23196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0721 23:43:05.660945   23196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0721 23:43:05.660953   23196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0721 23:43:05.660964   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0721 23:43:05.660963   23196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0721 23:43:05.660978   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0721 23:43:05.660989   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:43:05.661011   23196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0721 23:43:05.661038   23196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0721 23:43:05.665118   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0721 23:43:05.665142   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0721 23:43:05.700161   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0721 23:43:05.700165   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0721 23:43:05.700209   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0721 23:43:05.700311   23196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0721 23:43:05.756662   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0721 23:43:05.756712   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
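
The binaries above were fetched against checksum= URLs, meaning each release binary is validated against the hex digest published at the matching .sha256 location before landing in /var/lib/minikube/binaries. A sketch of that verification step; the digest passed in main is a placeholder, not a real value.

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verify streams the downloaded binary through SHA-256 and compares the
    // result against the published digest.
    func verify(path, wantHex string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	h := sha256.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	got := hex.EncodeToString(h.Sum(nil))
    	if got != wantHex {
    		return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
    	}
    	return nil
    }

    func main() {
    	// Placeholder digest; the real one is served at
    	// https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
    	if err := verify("kubelet", "<sha256-from-release>"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
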
	I0721 23:43:06.530873   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0721 23:43:06.539897   23196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0721 23:43:06.556272   23196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:43:06.572072   23196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0721 23:43:06.587303   23196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0721 23:43:06.590895   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:43:06.602268   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:43:06.711722   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:43:06.727567   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:43:06.728052   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:43:06.728104   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:43:06.744693   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I0721 23:43:06.745092   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:43:06.746102   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:43:06.746131   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:43:06.746487   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:43:06.746748   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:43:06.746904   23196 start.go:317] joinCluster: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:43:06.747060   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0721 23:43:06.747082   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:43:06.750062   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:43:06.750557   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:43:06.750584   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:43:06.750734   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:43:06.750902   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:43:06.751027   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:43:06.751130   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:43:06.904912   23196 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:43:06.904964   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 135g62.u4ctzsuofj006i1y --discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-564251-m03 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I0721 23:43:30.357894   23196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 135g62.u4ctzsuofj006i1y --discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-564251-m03 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (23.45289729s)
	I0721 23:43:30.357936   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0721 23:43:30.872196   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-564251-m03 minikube.k8s.io/updated_at=2024_07_21T23_43_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-564251 minikube.k8s.io/primary=false
	I0721 23:43:31.000136   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-564251-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0721 23:43:31.111420   23196 start.go:319] duration metric: took 24.364514251s to joinCluster
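
start.go:317-319 bracket the whole control-plane join: mint a fresh join command on the primary, run it on the new machine with --control-plane and its advertise address, then label the node and remove the control-plane NoSchedule taint so it can schedule workloads. A condensed Go sketch of that sequence — runOn is a hypothetical per-node command runner, the label set is abbreviated, and the sudo/env PATH wrapping from the log is omitted:

package provision

import "fmt"

// joinControlPlane mirrors the logged sequence: print a join command on the
// primary, execute it on the new node, then label and un-taint that node.
func joinControlPlane(runOn func(node, cmd string) (string, error), newNode, advertiseIP string) error {
	join, err := runOn("primary",
		"kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		return err
	}
	if _, err = runOn(newNode, fmt.Sprintf(
		"%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		join, advertiseIP)); err != nil {
		return err
	}
	if _, err = runOn("primary", fmt.Sprintf(
		"kubectl label --overwrite nodes %s minikube.k8s.io/primary=false", newNode)); err != nil {
		return err
	}
	_, err = runOn("primary", fmt.Sprintf(
		"kubectl taint nodes %s node-role.kubernetes.io/control-plane:NoSchedule-", newNode))
	return err
}
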
	I0721 23:43:31.111497   23196 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:43:31.111817   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:43:31.112658   23196 out.go:177] * Verifying Kubernetes components...
	I0721 23:43:31.114080   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:43:31.402850   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:43:31.424762   23196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:43:31.424966   23196 kapi.go:59] client config for ha-564251: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key", CAFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0721 23:43:31.425020   23196 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.91:8443
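
The kubeadm.go:483 override swaps the kubeconfig's HA virtual IP (192.168.39.254) for the primary's direct address, so the readiness polling below does not depend on the VIP that the new member is still joining behind. With client-go, the same override is roughly:

package provision

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// directClient loads a kubeconfig but pins the REST host to one control-plane
// node instead of the load-balanced virtual IP, as kubeadm.go:483 logs above.
func directClient(kubeconfig, directHost string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.Host = directHost // e.g. "https://192.168.39.91:8443"
	return kubernetes.NewForConfig(cfg)
}
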
	I0721 23:43:31.425255   23196 node_ready.go:35] waiting up to 6m0s for node "ha-564251-m03" to be "Ready" ...
	I0721 23:43:31.425352   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:31.425362   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:31.425369   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:31.425374   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:31.429044   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:31.925877   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:31.925907   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:31.925920   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:31.925926   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:31.929258   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:32.426207   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:32.426226   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:32.426235   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:32.426239   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:32.429776   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:32.925833   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:32.925855   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:32.925866   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:32.925875   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:32.928831   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:33.426060   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:33.426078   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:33.426085   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:33.426092   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:33.429268   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:33.429808   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:33.926012   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:33.926032   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:33.926041   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:33.926046   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:33.929721   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:34.425828   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:34.425847   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:34.425854   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:34.425860   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:34.429237   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:34.926189   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:34.926209   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:34.926217   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:34.926223   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:34.929615   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:35.425475   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:35.425494   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:35.425502   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:35.425507   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:35.428744   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:35.925770   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:35.925791   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:35.925799   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:35.925803   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:35.929136   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:35.929975   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:36.425884   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:36.425902   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:36.425910   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:36.425915   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:36.429398   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:36.926319   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:36.926341   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:36.926351   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:36.926356   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:36.930368   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:37.426492   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:37.426513   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:37.426525   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:37.426529   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:37.430591   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:37.925539   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:37.925560   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:37.925568   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:37.925572   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:37.928720   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:38.425635   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:38.425658   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:38.425666   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:38.425671   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:38.428524   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:38.429051   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:38.926217   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:38.926239   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:38.926247   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:38.926252   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:38.929862   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:39.425450   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:39.425474   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:39.425486   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:39.425492   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:39.428216   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:39.926482   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:39.926508   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:39.926519   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:39.926526   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:39.930056   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:40.425695   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:40.425713   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:40.425725   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:40.425729   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:40.431532   23196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0721 23:43:40.432222   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:40.925702   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:40.925721   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:40.925729   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:40.925732   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:40.928883   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:41.425892   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:41.425913   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:41.425921   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:41.425927   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:41.428966   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:41.925793   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:41.925815   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:41.925822   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:41.925825   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:41.928750   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:42.425643   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:42.425663   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:42.425670   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:42.425674   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:42.429127   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:42.926187   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:42.926210   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:42.926218   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:42.926222   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:42.929588   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:42.930141   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:43.426291   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:43.426312   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:43.426318   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:43.426324   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:43.429259   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:43.926114   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:43.926138   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:43.926146   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:43.926149   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:43.929325   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:44.425428   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:44.425447   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:44.425456   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:44.425460   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:44.428770   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:44.925544   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:44.925563   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:44.925568   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:44.925571   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:44.929039   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:45.425918   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:45.425936   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:45.425944   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:45.425948   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:45.428920   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:45.429608   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:45.925972   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:45.925997   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:45.926006   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:45.926009   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:45.929425   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:46.425884   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:46.425903   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:46.425911   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:46.425931   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:46.429760   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:46.925827   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:46.925847   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:46.925854   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:46.925859   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:46.929370   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:47.425444   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:47.425462   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.425470   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.425474   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.428676   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:47.926474   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:47.926498   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.926508   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.926514   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.930150   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:47.930828   23196 node_ready.go:49] node "ha-564251-m03" has status "Ready":"True"
	I0721 23:43:47.930847   23196 node_ready.go:38] duration metric: took 16.50556977s for node "ha-564251-m03" to be "Ready" ...
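
The loop above is node_ready.go issuing a GET on the node object roughly every 500ms until the Ready condition reports True, which here took 16.5s. Expressed with client-go, the check is approximately:

package provision

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True, the shape
// of the node_ready.go loop above (one GET about every 500ms, bounded by 6m).
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
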
	I0721 23:43:47.930855   23196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:43:47.930908   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:47.930916   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.930923   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.930926   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.939306   23196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0721 23:43:47.946025   23196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.946096   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bsbzk
	I0721 23:43:47.946105   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.946111   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.946116   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.949284   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:47.949886   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:47.949901   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.949908   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.949913   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.952737   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.953346   23196 pod_ready.go:92] pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:47.953362   23196 pod_ready.go:81] duration metric: took 7.314216ms for pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.953370   23196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.953414   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f4lqn
	I0721 23:43:47.953421   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.953429   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.953433   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.956032   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.956555   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:47.956574   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.956581   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.956587   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.959261   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.959848   23196 pod_ready.go:92] pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:47.959861   23196 pod_ready.go:81] duration metric: took 6.485232ms for pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.959868   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.959920   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251
	I0721 23:43:47.959929   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.959935   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.959938   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.962303   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.963048   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:47.963065   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.963074   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.963077   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.965396   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.965885   23196 pod_ready.go:92] pod "etcd-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:47.965898   23196 pod_ready.go:81] duration metric: took 6.02401ms for pod "etcd-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.965904   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.965943   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251-m02
	I0721 23:43:47.965952   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.965958   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.965963   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.968325   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.968854   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:47.968867   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.968873   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.968878   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.971089   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.971519   23196 pod_ready.go:92] pod "etcd-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:47.971535   23196 pod_ready.go:81] duration metric: took 5.625442ms for pod "etcd-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.971543   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:48.126929   23196 request.go:629] Waited for 155.327284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251-m03
	I0721 23:43:48.127015   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251-m03
	I0721 23:43:48.127025   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.127036   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.127047   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.131167   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:48.327206   23196 request.go:629] Waited for 195.358079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:48.327265   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:48.327273   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.327286   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.327295   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.331699   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:48.332308   23196 pod_ready.go:92] pod "etcd-ha-564251-m03" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:48.332333   23196 pod_ready.go:81] duration metric: took 360.782776ms for pod "etcd-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
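
The request.go:629 "Waited ... due to client-side throttling" lines are client-go's client-side rate limiter at work: with QPS and Burst left at zero in the rest.Config dumped above, the client defaults to 5 requests/second with a burst of 10, so this pod-by-pod sweep (two GETs per pod) repeatedly waits ~200ms between calls. If the waits mattered, raising the limits on the config is the usual fix:

package provision

import "k8s.io/client-go/rest"

// relaxThrottle raises client-go's default client-side rate limit
// (5 QPS, burst 10 when QPS/Burst are zero), which is what produces the
// request.go:629 throttling waits in this log.
func relaxThrottle(cfg *rest.Config) {
	cfg.QPS = 50
	cfg.Burst = 100
}
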
	I0721 23:43:48.332358   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:48.526855   23196 request.go:629] Waited for 194.432062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251
	I0721 23:43:48.526936   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251
	I0721 23:43:48.526945   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.526955   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.526964   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.530671   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:48.726624   23196 request.go:629] Waited for 195.327692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:48.726683   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:48.726690   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.726700   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.726705   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.730171   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:48.730798   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:48.730817   23196 pod_ready.go:81] duration metric: took 398.451431ms for pod "kube-apiserver-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:48.730825   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:48.926691   23196 request.go:629] Waited for 195.796759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m02
	I0721 23:43:48.926769   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m02
	I0721 23:43:48.926774   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.926787   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.926795   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.930198   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:49.127318   23196 request.go:629] Waited for 196.366628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:49.127379   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:49.127384   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.127391   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.127394   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.130655   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:49.131201   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:49.131219   23196 pod_ready.go:81] duration metric: took 400.386742ms for pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.131228   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.327346   23196 request.go:629] Waited for 196.060541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m03
	I0721 23:43:49.327415   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m03
	I0721 23:43:49.327421   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.327428   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.327433   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.330426   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:49.526550   23196 request.go:629] Waited for 195.274214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:49.526614   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:49.526621   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.526632   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.526637   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.529309   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:49.529956   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251-m03" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:49.529973   23196 pod_ready.go:81] duration metric: took 398.73979ms for pod "kube-apiserver-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.529983   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.727067   23196 request.go:629] Waited for 197.025666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251
	I0721 23:43:49.727144   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251
	I0721 23:43:49.727151   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.727161   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.727170   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.731068   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:49.926842   23196 request.go:629] Waited for 194.942395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:49.926894   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:49.926905   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.926914   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.926921   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.930093   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:49.930707   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:49.930727   23196 pod_ready.go:81] duration metric: took 400.737593ms for pod "kube-controller-manager-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.930736   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.126755   23196 request.go:629] Waited for 195.962238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m02
	I0721 23:43:50.126820   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m02
	I0721 23:43:50.126826   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.126846   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.126851   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.130343   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:50.327463   23196 request.go:629] Waited for 196.372309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:50.327509   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:50.327514   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.327521   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.327532   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.330198   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:50.330812   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:50.330833   23196 pod_ready.go:81] duration metric: took 400.088718ms for pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.330845   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.526917   23196 request.go:629] Waited for 196.002846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m03
	I0721 23:43:50.526983   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m03
	I0721 23:43:50.526991   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.527004   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.527009   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.535161   23196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0721 23:43:50.727367   23196 request.go:629] Waited for 191.442236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:50.727434   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:50.727441   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.727450   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.727455   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.731714   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:50.732519   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251-m03" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:50.732536   23196 pod_ready.go:81] duration metric: took 401.68329ms for pod "kube-controller-manager-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.732546   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2xlks" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.927160   23196 request.go:629] Waited for 194.546992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xlks
	I0721 23:43:50.927253   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xlks
	I0721 23:43:50.927265   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.927275   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.927280   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.931547   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:51.126846   23196 request.go:629] Waited for 194.351495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:51.126923   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:51.126930   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.126940   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.126951   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.131236   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:51.132019   23196 pod_ready.go:92] pod "kube-proxy-2xlks" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:51.132043   23196 pod_ready.go:81] duration metric: took 399.49068ms for pod "kube-proxy-2xlks" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.132053   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8c6vn" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.326504   23196 request.go:629] Waited for 194.390902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8c6vn
	I0721 23:43:51.326554   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8c6vn
	I0721 23:43:51.326559   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.326566   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.326569   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.330347   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:51.526893   23196 request.go:629] Waited for 195.395181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:51.526957   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:51.526964   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.526975   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.526980   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.530104   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:51.530666   23196 pod_ready.go:92] pod "kube-proxy-8c6vn" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:51.530690   23196 pod_ready.go:81] duration metric: took 398.627758ms for pod "kube-proxy-8c6vn" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.530699   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-srpl8" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.726585   23196 request.go:629] Waited for 195.814641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srpl8
	I0721 23:43:51.726664   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srpl8
	I0721 23:43:51.726670   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.726678   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.726683   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.729647   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:51.927272   23196 request.go:629] Waited for 196.827193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:51.927327   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:51.927331   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.927338   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.927342   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.930672   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:51.931305   23196 pod_ready.go:92] pod "kube-proxy-srpl8" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:51.931324   23196 pod_ready.go:81] duration metric: took 400.618664ms for pod "kube-proxy-srpl8" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.931334   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.127505   23196 request.go:629] Waited for 196.102945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251
	I0721 23:43:52.127562   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251
	I0721 23:43:52.127569   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.127579   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.127584   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.130733   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:52.327022   23196 request.go:629] Waited for 195.369501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:52.327079   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:52.327084   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.327091   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.327094   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.329923   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:52.330532   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:52.330548   23196 pod_ready.go:81] duration metric: took 399.206943ms for pod "kube-scheduler-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.330556   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.526589   23196 request.go:629] Waited for 195.962537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m02
	I0721 23:43:52.526687   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m02
	I0721 23:43:52.526696   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.526704   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.526711   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.529872   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:52.727067   23196 request.go:629] Waited for 196.386081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:52.727139   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:52.727144   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.727152   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.727159   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.730488   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:52.731218   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:52.731240   23196 pod_ready.go:81] duration metric: took 400.676697ms for pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.731257   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.927477   23196 request.go:629] Waited for 196.145575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m03
	I0721 23:43:52.927558   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m03
	I0721 23:43:52.927564   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.927579   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.927583   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.930775   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:53.126671   23196 request.go:629] Waited for 195.310681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:53.126719   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:53.126731   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.126748   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.126755   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.129792   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:53.130351   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251-m03" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:53.130369   23196 pod_ready.go:81] duration metric: took 399.104538ms for pod "kube-scheduler-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:53.130379   23196 pod_ready.go:38] duration metric: took 5.19951489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
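
The pod_ready sequence above issues one GET for each control-plane pod and one for its node per iteration, and declares success once the PodReady condition reports True. A sketch of the pod half of that check using client-go; waitPodReady and the 400ms interval are illustrative, not minikube's actual code:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's PodReady condition is True, mirroring
    // the pod_ready.go:78/92 pairs logged above (~400ms per already-ready pod).
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API errors as transient and keep polling
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
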
	I0721 23:43:53.130393   23196 api_server.go:52] waiting for apiserver process to appear ...
	I0721 23:43:53.130440   23196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:43:53.146643   23196 api_server.go:72] duration metric: took 22.035111538s to wait for apiserver process to appear ...
	I0721 23:43:53.146666   23196 api_server.go:88] waiting for apiserver healthz status ...
	I0721 23:43:53.146687   23196 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0721 23:43:53.152312   23196 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0721 23:43:53.152384   23196 round_trippers.go:463] GET https://192.168.39.91:8443/version
	I0721 23:43:53.152395   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.152405   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.152416   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.153278   23196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0721 23:43:53.153329   23196 api_server.go:141] control plane version: v1.30.3
	I0721 23:43:53.153342   23196 api_server.go:131] duration metric: took 6.669849ms to wait for apiserver health ...
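
Once pgrep confirms a kube-apiserver process exists (api_server.go:72 above), the health wait reduces to an HTTPS GET against /healthz that must return 200 with the literal body "ok", exactly as logged. A self-contained sketch; skipping certificate verification is a simplification for illustration, whereas minikube trusts the cluster CA:

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz probes an endpoint such as https://192.168.39.91:8443/healthz
    // and accepts only a 200 response whose body is exactly "ok".
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative shortcut only; verify the cluster CA in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
        }
        return nil
    }
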
	I0721 23:43:53.153351   23196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0721 23:43:53.326762   23196 request.go:629] Waited for 173.343527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:53.326849   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:53.326862   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.326874   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.326886   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.334330   23196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0721 23:43:53.340512   23196 system_pods.go:59] 24 kube-system pods found
	I0721 23:43:53.340538   23196 system_pods.go:61] "coredns-7db6d8ff4d-bsbzk" [7d58d6f8-de63-49bf-9017-3cac954350d0] Running
	I0721 23:43:53.340543   23196 system_pods.go:61] "coredns-7db6d8ff4d-f4lqn" [ebae638d-339c-4241-a5b3-ab4c766efc2f] Running
	I0721 23:43:53.340547   23196 system_pods.go:61] "etcd-ha-564251" [ba57dacd-8bb8-4fc5-8c55-ab660c773c4a] Running
	I0721 23:43:53.340550   23196 system_pods.go:61] "etcd-ha-564251-m02" [4c0aa7df-9cac-4a18-a30f-78412cfce28d] Running
	I0721 23:43:53.340554   23196 system_pods.go:61] "etcd-ha-564251-m03" [54c2633e-32df-4367-affb-a723188f5249] Running
	I0721 23:43:53.340557   23196 system_pods.go:61] "kindnet-99b2q" [84ff92b4-7ad2-44e7-a6e6-89dcbb9413e2] Running
	I0721 23:43:53.340560   23196 system_pods.go:61] "kindnet-jz5md" [f109e939-9f9b-4fa8-b844-4c2652615933] Running
	I0721 23:43:53.340563   23196 system_pods.go:61] "kindnet-s2t8k" [96cd07e3-b249-4f1b-a6c0-6e2bc2791df1] Running
	I0721 23:43:53.340566   23196 system_pods.go:61] "kube-apiserver-ha-564251" [284aac5b-c6af-4a2f-bece-dfb3ca4fde87] Running
	I0721 23:43:53.340569   23196 system_pods.go:61] "kube-apiserver-ha-564251-m02" [291efb5d-a0a6-4edd-8258-4a2b85f91e6f] Running
	I0721 23:43:53.340571   23196 system_pods.go:61] "kube-apiserver-ha-564251-m03" [ecb696ba-6d8b-43e2-a700-f4e60e8b6bfd] Running
	I0721 23:43:53.340575   23196 system_pods.go:61] "kube-controller-manager-ha-564251" [44710bc5-1824-4df6-b321-ac7db26d18a5] Running
	I0721 23:43:53.340577   23196 system_pods.go:61] "kube-controller-manager-ha-564251-m02" [ec0dd23d-58ee-49ca-b8e4-29ad2032a915] Running
	I0721 23:43:53.340580   23196 system_pods.go:61] "kube-controller-manager-ha-564251-m03" [bb892047-2a7f-49ad-ae3b-d596e27123d4] Running
	I0721 23:43:53.340583   23196 system_pods.go:61] "kube-proxy-2xlks" [67ba351a-20c6-442f-bc11-d1363ee387f7] Running
	I0721 23:43:53.340586   23196 system_pods.go:61] "kube-proxy-8c6vn" [5b85365a-8a91-4e17-be4f-efc76e876e35] Running
	I0721 23:43:53.340589   23196 system_pods.go:61] "kube-proxy-srpl8" [faae2035-d506-4dd6-98b6-c3c5f5b53e84] Running
	I0721 23:43:53.340592   23196 system_pods.go:61] "kube-scheduler-ha-564251" [c7cd3ce3-94c8-4369-ba32-b832940c6aec] Running
	I0721 23:43:53.340594   23196 system_pods.go:61] "kube-scheduler-ha-564251-m02" [23912687-c898-47f3-91a9-c8784fb5d557] Running
	I0721 23:43:53.340597   23196 system_pods.go:61] "kube-scheduler-ha-564251-m03" [8242efc1-a265-4d55-aa13-b6ffc5fafabb] Running
	I0721 23:43:53.340600   23196 system_pods.go:61] "kube-vip-ha-564251" [e865cc87-be77-43f3-bef2-4c47dbe7ffe5] Running
	I0721 23:43:53.340603   23196 system_pods.go:61] "kube-vip-ha-564251-m02" [84f924b2-df09-413e-8a12-658116f072d3] Running
	I0721 23:43:53.340606   23196 system_pods.go:61] "kube-vip-ha-564251-m03" [acec0505-d562-4e84-8d2c-355d77f73d71] Running
	I0721 23:43:53.340609   23196 system_pods.go:61] "storage-provisioner" [75c1992e-23ca-41e0-b046-1b70a6f6f63a] Running
	I0721 23:43:53.340614   23196 system_pods.go:74] duration metric: took 187.254705ms to wait for pod list to return data ...
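
The system_pods check is a single List over the kube-system namespace; each "Running" line above is one pod's reported phase. A hedged client-go equivalent (listKubeSystemPods is an illustrative name; the strict Running requirement is what the later system_pods.go:116 pass enforces):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listKubeSystemPods prints name, UID and phase for every kube-system pod,
    // the same fields as the 24-pod listing above, and flags any non-Running pod.
    func listKubeSystemPods(ctx context.Context, c kubernetes.Interface) error {
        pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
            if p.Status.Phase != corev1.PodRunning {
                return fmt.Errorf("pod %s is %s, not Running", p.Name, p.Status.Phase)
            }
        }
        return nil
    }
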
	I0721 23:43:53.340624   23196 default_sa.go:34] waiting for default service account to be created ...
	I0721 23:43:53.527024   23196 request.go:629] Waited for 186.337733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/default/serviceaccounts
	I0721 23:43:53.527083   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/default/serviceaccounts
	I0721 23:43:53.527091   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.527101   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.527113   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.530370   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:53.530497   23196 default_sa.go:45] found service account: "default"
	I0721 23:43:53.530514   23196 default_sa.go:55] duration metric: took 189.883296ms for default service account to be created ...
	I0721 23:43:53.530525   23196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0721 23:43:53.726973   23196 request.go:629] Waited for 196.366837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:53.727061   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:53.727073   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.727084   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.727095   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.735804   23196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0721 23:43:53.742063   23196 system_pods.go:86] 24 kube-system pods found
	I0721 23:43:53.742087   23196 system_pods.go:89] "coredns-7db6d8ff4d-bsbzk" [7d58d6f8-de63-49bf-9017-3cac954350d0] Running
	I0721 23:43:53.742092   23196 system_pods.go:89] "coredns-7db6d8ff4d-f4lqn" [ebae638d-339c-4241-a5b3-ab4c766efc2f] Running
	I0721 23:43:53.742096   23196 system_pods.go:89] "etcd-ha-564251" [ba57dacd-8bb8-4fc5-8c55-ab660c773c4a] Running
	I0721 23:43:53.742100   23196 system_pods.go:89] "etcd-ha-564251-m02" [4c0aa7df-9cac-4a18-a30f-78412cfce28d] Running
	I0721 23:43:53.742104   23196 system_pods.go:89] "etcd-ha-564251-m03" [54c2633e-32df-4367-affb-a723188f5249] Running
	I0721 23:43:53.742109   23196 system_pods.go:89] "kindnet-99b2q" [84ff92b4-7ad2-44e7-a6e6-89dcbb9413e2] Running
	I0721 23:43:53.742115   23196 system_pods.go:89] "kindnet-jz5md" [f109e939-9f9b-4fa8-b844-4c2652615933] Running
	I0721 23:43:53.742121   23196 system_pods.go:89] "kindnet-s2t8k" [96cd07e3-b249-4f1b-a6c0-6e2bc2791df1] Running
	I0721 23:43:53.742129   23196 system_pods.go:89] "kube-apiserver-ha-564251" [284aac5b-c6af-4a2f-bece-dfb3ca4fde87] Running
	I0721 23:43:53.742139   23196 system_pods.go:89] "kube-apiserver-ha-564251-m02" [291efb5d-a0a6-4edd-8258-4a2b85f91e6f] Running
	I0721 23:43:53.742144   23196 system_pods.go:89] "kube-apiserver-ha-564251-m03" [ecb696ba-6d8b-43e2-a700-f4e60e8b6bfd] Running
	I0721 23:43:53.742150   23196 system_pods.go:89] "kube-controller-manager-ha-564251" [44710bc5-1824-4df6-b321-ac7db26d18a5] Running
	I0721 23:43:53.742159   23196 system_pods.go:89] "kube-controller-manager-ha-564251-m02" [ec0dd23d-58ee-49ca-b8e4-29ad2032a915] Running
	I0721 23:43:53.742166   23196 system_pods.go:89] "kube-controller-manager-ha-564251-m03" [bb892047-2a7f-49ad-ae3b-d596e27123d4] Running
	I0721 23:43:53.742171   23196 system_pods.go:89] "kube-proxy-2xlks" [67ba351a-20c6-442f-bc11-d1363ee387f7] Running
	I0721 23:43:53.742177   23196 system_pods.go:89] "kube-proxy-8c6vn" [5b85365a-8a91-4e17-be4f-efc76e876e35] Running
	I0721 23:43:53.742181   23196 system_pods.go:89] "kube-proxy-srpl8" [faae2035-d506-4dd6-98b6-c3c5f5b53e84] Running
	I0721 23:43:53.742187   23196 system_pods.go:89] "kube-scheduler-ha-564251" [c7cd3ce3-94c8-4369-ba32-b832940c6aec] Running
	I0721 23:43:53.742191   23196 system_pods.go:89] "kube-scheduler-ha-564251-m02" [23912687-c898-47f3-91a9-c8784fb5d557] Running
	I0721 23:43:53.742197   23196 system_pods.go:89] "kube-scheduler-ha-564251-m03" [8242efc1-a265-4d55-aa13-b6ffc5fafabb] Running
	I0721 23:43:53.742201   23196 system_pods.go:89] "kube-vip-ha-564251" [e865cc87-be77-43f3-bef2-4c47dbe7ffe5] Running
	I0721 23:43:53.742206   23196 system_pods.go:89] "kube-vip-ha-564251-m02" [84f924b2-df09-413e-8a12-658116f072d3] Running
	I0721 23:43:53.742210   23196 system_pods.go:89] "kube-vip-ha-564251-m03" [acec0505-d562-4e84-8d2c-355d77f73d71] Running
	I0721 23:43:53.742216   23196 system_pods.go:89] "storage-provisioner" [75c1992e-23ca-41e0-b046-1b70a6f6f63a] Running
	I0721 23:43:53.742225   23196 system_pods.go:126] duration metric: took 211.693904ms to wait for k8s-apps to be running ...
	I0721 23:43:53.742237   23196 system_svc.go:44] waiting for kubelet service to be running ....
	I0721 23:43:53.742283   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:43:53.757770   23196 system_svc.go:56] duration metric: took 15.524949ms WaitForService to wait for kubelet
	I0721 23:43:53.757799   23196 kubeadm.go:582] duration metric: took 22.64627139s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
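
The kubelet probe above runs over SSH and relies entirely on the exit status: with --quiet, systemctl is-active prints nothing and exits 0 only when the unit is active. A local sketch of the same idea (minikube's exact remote invocation is the ssh_runner line above):

    package sketch

    import "os/exec"

    // kubeletActive reports whether systemd considers the kubelet unit active.
    // Because --quiet suppresses all output, Run()'s error value alone carries
    // the answer: nil means exit code 0, i.e. active.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
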
	I0721 23:43:53.757815   23196 node_conditions.go:102] verifying NodePressure condition ...
	I0721 23:43:53.926970   23196 request.go:629] Waited for 169.07378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes
	I0721 23:43:53.927030   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes
	I0721 23:43:53.927038   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.927049   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.927056   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.931456   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:53.932551   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:43:53.932572   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:43:53.932584   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:43:53.932587   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:43:53.932590   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:43:53.932593   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:43:53.932598   23196 node_conditions.go:105] duration metric: took 174.777231ms to run NodePressure ...
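
The NodePressure verification lists all nodes and reads capacity from each node's status, which is why three storage/cpu pairs appear above, one per control-plane node. A sketch under those assumptions that also fails on memory or disk pressure (verifyNodePressure is an illustrative name):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // verifyNodePressure prints the same capacity fields as node_conditions.go
    // above and returns an error if any node reports a pressure condition.
    func verifyNodePressure(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
            for _, cond := range n.Status.Conditions {
                pressure := cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure
                if pressure && cond.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
                }
            }
        }
        return nil
    }
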
	I0721 23:43:53.932608   23196 start.go:241] waiting for startup goroutines ...
	I0721 23:43:53.932626   23196 start.go:255] writing updated cluster config ...
	I0721 23:43:53.932865   23196 ssh_runner.go:195] Run: rm -f paused
	I0721 23:43:53.984198   23196 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0721 23:43:53.985909   23196 out.go:177] * Done! kubectl is now configured to use "ha-564251" cluster and "default" namespace by default
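
The closing "minor skew: 0" compares only the minor components of the kubectl and cluster versions; minikube warns when the two drift apart. An illustrative reimplementation of that arithmetic (not minikube's code; error handling is elided):

    package sketch

    import (
        "strconv"
        "strings"
    )

    // minorSkew returns |minor(a) - minor(b)| for versions shaped like "1.30.3".
    func minorSkew(a, b string) int {
        minor := func(v string) int {
            m, _ := strconv.Atoi(strings.Split(strings.TrimPrefix(v, "v"), ".")[1])
            return m
        }
        skew := minor(a) - minor(b)
        if skew < 0 {
            skew = -skew
        }
        return skew // minorSkew("1.30.3", "1.30.3") == 0, as logged above
    }
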
	
	
	==> CRI-O <==
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.655447710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721605651655418433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=557f652b-8bd1-43da-ad9c-c251bc57bfe2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.656144085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea576d38-74d2-4f2e-8e9c-a1642a9fc929 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.656220621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea576d38-74d2-4f2e-8e9c-a1642a9fc929 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.656519426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605438091236780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306949878120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0,PodSandboxId:2cd28c9ca5ac8e1abd87d642c9ce470b7f74994d1daf2847a46cbfd9d484f9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721605306941293238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306869088689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605295239532071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605291575665482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007,PodSandboxId:5d8c01689d032c61a375f6d41985d763c38f024c41dbf3ad2fa6782c9cb654f9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605273500133899,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcfe16697573d7920cf75add2f90240,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605270815923890,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed,PodSandboxId:4c669b6cce38be1c1629208e0a481d2b0cdaacde4c7d151d08410182e750dd2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605270778824115,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3,PodSandboxId:08f4ba91fc6acb867b58183f7e7ec64c2ea587bb2b6d4211b99026ba25fc51c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605270766867318,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605270662100386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea576d38-74d2-4f2e-8e9c-a1642a9fc929 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.695873439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=562151f3-0169-43f1-a396-ba7788bd780a name=/runtime.v1.RuntimeService/Version
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.695944085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=562151f3-0169-43f1-a396-ba7788bd780a name=/runtime.v1.RuntimeService/Version
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.697144815Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ac0d0d7-070c-4526-a2cc-ba36580b1cd4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.697615245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721605651697589543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ac0d0d7-070c-4526-a2cc-ba36580b1cd4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.698126703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d71513f7-fdbe-48a7-bf5e-a742ab274295 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.698181194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d71513f7-fdbe-48a7-bf5e-a742ab274295 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.698421480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605438091236780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306949878120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0,PodSandboxId:2cd28c9ca5ac8e1abd87d642c9ce470b7f74994d1daf2847a46cbfd9d484f9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721605306941293238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306869088689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605295239532071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605291575665482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007,PodSandboxId:5d8c01689d032c61a375f6d41985d763c38f024c41dbf3ad2fa6782c9cb654f9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605273500133899,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcfe16697573d7920cf75add2f90240,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605270815923890,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed,PodSandboxId:4c669b6cce38be1c1629208e0a481d2b0cdaacde4c7d151d08410182e750dd2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605270778824115,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3,PodSandboxId:08f4ba91fc6acb867b58183f7e7ec64c2ea587bb2b6d4211b99026ba25fc51c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605270766867318,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605270662100386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d71513f7-fdbe-48a7-bf5e-a742ab274295 name=/runtime.v1.RuntimeService/ListContainers
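
The CRI-O entries above and below are the server-side trace of ordinary CRI polling: each cycle is one Version, one ImageFsInfo and one unfiltered ListContainers call (note the fresh id=... UUID per request), issued by a CRI client such as the kubelet or crictl. A hedged client-side sketch in Go over CRI-O's conventional socket path:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // criSnapshot issues the same three calls seen in the debug log above.
    // The socket path is CRI-O's default; error handling is kept minimal.
    func criSnapshot(ctx context.Context) error {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            return err
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            return err
        }
        fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion) // cri-o 1.29.1 above

        img := runtimeapi.NewImageServiceClient(conn)
        if _, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err != nil {
            return err
        }

        // An empty filter triggers CRI-O's "No filters were applied,
        // returning full container list" debug line.
        list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            return err
        }
        fmt.Println(len(list.Containers), "containers")
        return nil
    }
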
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.733278305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a6435c0-16c9-440c-93c8-4a8f4a0d9c9c name=/runtime.v1.RuntimeService/Version
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.733349111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a6435c0-16c9-440c-93c8-4a8f4a0d9c9c name=/runtime.v1.RuntimeService/Version
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.734911436Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b2a647a-d8b5-4940-9e57-e4cc366d457b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.735363302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721605651735343171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b2a647a-d8b5-4940-9e57-e4cc366d457b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.735996458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b280387-debf-4408-aedb-1169cb03ea99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.736053585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b280387-debf-4408-aedb-1169cb03ea99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.736275820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605438091236780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306949878120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0,PodSandboxId:2cd28c9ca5ac8e1abd87d642c9ce470b7f74994d1daf2847a46cbfd9d484f9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721605306941293238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306869088689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-33
9c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721605295239532071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172160529
1575665482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007,PodSandboxId:5d8c01689d032c61a375f6d41985d763c38f024c41dbf3ad2fa6782c9cb654f9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172160527350
0133899,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcfe16697573d7920cf75add2f90240,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605270815923890,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed,PodSandboxId:4c669b6cce38be1c1629208e0a481d2b0cdaacde4c7d151d08410182e750dd2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605270778824115,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3,PodSandboxId:08f4ba91fc6acb867b58183f7e7ec64c2ea587bb2b6d4211b99026ba25fc51c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605270766867318,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605270662100386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b280387-debf-4408-aedb-1169cb03ea99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.774522143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2f23917-1391-4682-a309-a97ef3a0421d name=/runtime.v1.RuntimeService/Version
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.774629348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2f23917-1391-4682-a309-a97ef3a0421d name=/runtime.v1.RuntimeService/Version
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.775858143Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27bf806a-c796-40fa-bf18-f3fa1306f598 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.776298514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721605651776276881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27bf806a-c796-40fa-bf18-f3fa1306f598 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.776850316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66760d01-fd85-445e-ad12-c9fccba7c437 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.776904202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66760d01-fd85-445e-ad12-c9fccba7c437 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:47:31 ha-564251 crio[681]: time="2024-07-21 23:47:31.777131244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605438091236780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306949878120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0,PodSandboxId:2cd28c9ca5ac8e1abd87d642c9ce470b7f74994d1daf2847a46cbfd9d484f9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721605306941293238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306869088689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-33
9c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721605295239532071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172160529
1575665482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007,PodSandboxId:5d8c01689d032c61a375f6d41985d763c38f024c41dbf3ad2fa6782c9cb654f9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172160527350
0133899,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcfe16697573d7920cf75add2f90240,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605270815923890,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed,PodSandboxId:4c669b6cce38be1c1629208e0a481d2b0cdaacde4c7d151d08410182e750dd2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605270778824115,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3,PodSandboxId:08f4ba91fc6acb867b58183f7e7ec64c2ea587bb2b6d4211b99026ba25fc51c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605270766867318,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605270662100386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66760d01-fd85-445e-ad12-c9fccba7c437 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3769ca1c0d189       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4399dac80b572       busybox-fc5497c4f-tvjh7
	fd88a6f6b66dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   60549b9fc09ba       coredns-7db6d8ff4d-bsbzk
	db39c7c7e0f7c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   2cd28c9ca5ac8       storage-provisioner
	d708ea287a4e1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   3cf5796c9ffab       coredns-7db6d8ff4d-f4lqn
	b2afbf6c4dfa0       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    5 minutes ago       Running             kindnet-cni               0                   8c7a9ed52b5b4       kindnet-jz5md
	777c36438bf0f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   997932c064fbe       kube-proxy-srpl8
	bd2d1274e4986       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   5d8c01689d032       kube-vip-ha-564251
	22bd5cac142d6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   2d4165e9b2df2       kube-scheduler-ha-564251
	fb0b898c77f8d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   4c669b6cce38b       kube-apiserver-ha-564251
	17153bc2e8cea       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   08f4ba91fc6ac       kube-controller-manager-ha-564251
	9863a1f5cf334       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   bc6861a50f8f6       etcd-ha-564251
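	
	A note on provenance: the table above is crictl's view of the same ListContainers data in the CRI-O debug log. A minimal way to reproduce it by hand against this profile (a sketch, assuming the ha-564251 cluster is still running; crictl ships in the minikube guest, and the socket path is the one from the node annotations below):
	
	  out/minikube-linux-amd64 -p ha-564251 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a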
	
	
	==> coredns [d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091] <==
	[INFO] 10.244.1.2:43405 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00026763s
	[INFO] 10.244.2.2:54021 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001690935s
	[INFO] 10.244.2.2:51685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084922s
	[INFO] 10.244.2.2:33159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100397s
	[INFO] 10.244.2.2:33164 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122928s
	[INFO] 10.244.2.2:43819 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076913s
	[INFO] 10.244.2.2:59599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063404s
	[INFO] 10.244.0.4:53831 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001206293s
	[INFO] 10.244.0.4:57062 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100467s
	[INFO] 10.244.1.2:34188 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014651s
	[INFO] 10.244.1.2:41501 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011577s
	[INFO] 10.244.1.2:34022 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084216s
	[INFO] 10.244.2.2:36668 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118928s
	[INFO] 10.244.0.4:60553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129219s
	[INFO] 10.244.0.4:34229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158514s
	[INFO] 10.244.0.4:35099 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013345s
	[INFO] 10.244.1.2:60128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204062s
	[INFO] 10.244.1.2:51220 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169537s
	[INFO] 10.244.1.2:50118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000261213s
	[INFO] 10.244.2.2:42616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012241s
	[INFO] 10.244.2.2:51984 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223089s
	[INFO] 10.244.2.2:60866 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100348s
	[INFO] 10.244.0.4:38494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093863s
	[INFO] 10.244.0.4:56964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080856s
	[INFO] 10.244.0.4:37413 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172185s
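	
	Both CoreDNS replicas log one line per resolved query. To pull these logs directly instead of via the minikube log bundle, something like the following works (a sketch, assuming the kubeconfig context is named after the profile and the pod names still match the container listing above):
	
	  kubectl --context ha-564251 -n kube-system logs coredns-7db6d8ff4d-f4lqn
	  kubectl --context ha-564251 -n kube-system logs coredns-7db6d8ff4d-bsbzk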
	
	
	==> coredns [fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50666 - 38198 "HINFO IN 5523897286626880771.7232038906359800539. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010268589s
	[INFO] 10.244.1.2:42574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000556256s
	[INFO] 10.244.1.2:48153 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.011402035s
	[INFO] 10.244.2.2:35506 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000434307s
	[INFO] 10.244.2.2:50811 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001393702s
	[INFO] 10.244.1.2:47400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171001s
	[INFO] 10.244.1.2:51399 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162839s
	[INFO] 10.244.2.2:46920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139973s
	[INFO] 10.244.2.2:45334 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001092856s
	[INFO] 10.244.0.4:53396 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109772s
	[INFO] 10.244.0.4:54634 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001652249s
	[INFO] 10.244.0.4:45490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147442s
	[INFO] 10.244.0.4:46915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090743s
	[INFO] 10.244.0.4:60906 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127948s
	[INFO] 10.244.0.4:36593 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118548s
	[INFO] 10.244.1.2:59477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105785s
	[INFO] 10.244.2.2:48044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138738s
	[INFO] 10.244.2.2:48209 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093024s
	[INFO] 10.244.2.2:54967 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089783s
	[INFO] 10.244.0.4:47425 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088831s
	[INFO] 10.244.1.2:59455 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131678s
	[INFO] 10.244.2.2:60606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089108s
	[INFO] 10.244.0.4:46173 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097876s
	
	
	==> describe nodes <==
	Name:               ha-564251
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T23_41_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:47:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:44:23 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:44:23 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:44:23 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:44:23 +0000   Sun, 21 Jul 2024 23:41:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    ha-564251
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 83877339e2d74557b5e6d75fd0a30c5b
	  System UUID:                83877339-e2d7-4557-b5e6-d75fd0a30c5b
	  Boot ID:                    4d4acbc6-fdf1-4a14-b622-8bad377224dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tvjh7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7db6d8ff4d-bsbzk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m2s
	  kube-system                 coredns-7db6d8ff4d-f4lqn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m2s
	  kube-system                 etcd-ha-564251                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m12s
	  kube-system                 kindnet-jz5md                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m2s
	  kube-system                 kube-apiserver-ha-564251             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-564251    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-srpl8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-ha-564251             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-vip-ha-564251                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m                     kube-proxy       
	  Normal  NodeHasSufficientPID     6m22s (x7 over 6m22s)  kubelet          Node ha-564251 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m22s (x8 over 6m22s)  kubelet          Node ha-564251 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s (x8 over 6m22s)  kubelet          Node ha-564251 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m12s                  kubelet          Node ha-564251 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m12s                  kubelet          Node ha-564251 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m12s                  kubelet          Node ha-564251 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal  NodeReady                5m46s                  kubelet          Node ha-564251 status is now: NodeReady
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	
	
	Name:               ha-564251-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_42_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:42:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:45:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Jul 2024 23:44:19 +0000   Sun, 21 Jul 2024 23:45:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Jul 2024 23:44:19 +0000   Sun, 21 Jul 2024 23:45:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Jul 2024 23:44:19 +0000   Sun, 21 Jul 2024 23:45:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Jul 2024 23:44:19 +0000   Sun, 21 Jul 2024 23:45:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-564251-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8db54debc3f459a84145497caff8bc1
	  System UUID:                e8db54de-bc3f-459a-8414-5497caff8bc1
	  Boot ID:                    e9c8db11-8f9d-4e77-bb70-f3aef06af356
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2jrmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-564251-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-99b2q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-564251-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-ha-564251-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-8c6vn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-564251-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-564251-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-564251-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-564251-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-564251-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-564251-m02 status is now: NodeNotReady
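	
	The Unknown conditions, the unreachable NoExecute/NoSchedule taints, and the NodeNotReady event are all consistent with this secondary control-plane node having been stopped mid-test. A quick interactive cross-check (assuming the ha-564251 context):
	
	  kubectl --context ha-564251 get nodes -o wide
	  kubectl --context ha-564251 describe node ha-564251-m02 | grep -A1 'Taints:'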
	
	
	Name:               ha-564251-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_43_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:43:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:47:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:44:28 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:44:28 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:44:28 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:44:28 +0000   Sun, 21 Jul 2024 23:43:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-564251-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 edaed2175ae2489883b557af269e9263
	  System UUID:                edaed217-5ae2-4898-83b5-57af269e9263
	  Boot ID:                    d9bd97ea-d279-48c4-b4cf-847e1fb7c8fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s2cqd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-564251-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m3s
	  kube-system                 kindnet-s2t8k                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-apiserver-ha-564251-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-ha-564251-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-2xlks                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-ha-564251-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-564251-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node ha-564251-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node ha-564251-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node ha-564251-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal  RegisteredNode           3m47s                node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	
	
	Name:               ha-564251-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_44_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:44:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:45:02 +0000   Sun, 21 Jul 2024 23:44:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:45:02 +0000   Sun, 21 Jul 2024 23:44:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:45:02 +0000   Sun, 21 Jul 2024 23:44:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:45:02 +0000   Sun, 21 Jul 2024 23:44:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    ha-564251-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf784ac43fb240a1b428a7ebf8ca34bc
	  System UUID:                cf784ac4-3fb2-40a1-b428-a7ebf8ca34bc
	  Boot ID:                    344142ed-1d06-4520-a624-7c3d556f224c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6mfjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-lv5zw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-564251-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-564251-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-564251-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-564251-m04 status is now: NodeReady
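	
	The four node descriptions above are a verbatim "kubectl describe nodes" capture taken by the log collector; to regenerate the same view (assuming the profile is still up and its kubeconfig context carries the profile name):
	
	  kubectl --context ha-564251 describe nodes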
	
	
	==> dmesg <==
	[Jul21 23:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.420656] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.747762] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.566670] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul21 23:41] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.053909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055459] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.166215] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.145388] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268301] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.918090] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.419554] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.062251] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.216979] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075586] kauditd_printk_skb: 79 callbacks suppressed
	[ +11.003747] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.099946] kauditd_printk_skb: 34 callbacks suppressed
	[Jul21 23:42] kauditd_printk_skb: 26 callbacks suppressed
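	
	The ring-buffer warnings above (nomodeset, NFSD recovery, kauditd suppression) are normal for the minikube KVM guest at boot. To re-read the buffer on a live profile (a sketch, assuming ha-564251 is still running):
	
	  out/minikube-linux-amd64 -p ha-564251 ssh -- sudo dmesg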
	
	
	==> etcd [9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7] <==
	{"level":"warn","ts":"2024-07-21T23:47:32.003416Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d1cad45d5a401f4","rtt":"898.476µs","error":"dial tcp 192.168.39.202:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-21T23:47:32.003498Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d1cad45d5a401f4","rtt":"9.710411ms","error":"dial tcp 192.168.39.202:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-21T23:47:32.063032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.076396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.084114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.086631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.091999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.096211Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.098913Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.106425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.113265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.119301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.12344Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.126693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.137389Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.143185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.148207Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"d1cad45d5a401f4","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-21T23:47:32.148295Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d1cad45d5a401f4","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-21T23:47:32.149115Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.15295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.155994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.169786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.185803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.188942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:47:32.198943Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:47:32 up 6 min,  0 users,  load average: 0.22, 0.28, 0.15
	Linux ha-564251 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5] <==
	I0721 23:46:56.155509       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:47:06.155804       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:47:06.155934       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:47:06.156105       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:47:06.156132       1 main.go:299] handling current node
	I0721 23:47:06.156154       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:47:06.156170       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:47:06.156234       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:47:06.156253       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:47:16.150995       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:47:16.151034       1 main.go:299] handling current node
	I0721 23:47:16.151048       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:47:16.151053       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:47:16.151195       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:47:16.151215       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:47:16.151294       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:47:16.151321       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:47:26.154399       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:47:26.154493       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:47:26.154738       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:47:26.154783       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:47:26.154863       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:47:26.154883       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:47:26.154940       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:47:26.154960       1 main.go:299] handling current node
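
Note: kindnet loops over all four nodes on each ~10s cycle, installing routes for every peer's pod CIDR and logging "handling current node" when it reaches itself. The CIDRs it prints can be cross-checked against the API server's node specs (a sketch, not from the test run):

    kubectl --context ha-564251 get nodes \
      -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR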
	
	
	==> kube-apiserver [fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed] <==
	I0721 23:41:15.479862       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0721 23:41:15.486032       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.91]
	I0721 23:41:15.487091       1 controller.go:615] quota admission added evaluator for: endpoints
	I0721 23:41:15.491154       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0721 23:41:15.828239       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0721 23:41:20.136856       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0721 23:41:20.154190       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0721 23:41:20.167388       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0721 23:41:30.134929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0721 23:41:30.239978       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0721 23:43:59.080244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51542: use of closed network connection
	E0721 23:43:59.272470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51554: use of closed network connection
	E0721 23:43:59.450298       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51582: use of closed network connection
	E0721 23:43:59.626286       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51610: use of closed network connection
	E0721 23:43:59.804539       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51638: use of closed network connection
	E0721 23:43:59.995510       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51658: use of closed network connection
	E0721 23:44:00.179899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51682: use of closed network connection
	E0721 23:44:00.350828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51700: use of closed network connection
	E0721 23:44:00.532002       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51718: use of closed network connection
	E0721 23:44:00.822858       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51748: use of closed network connection
	E0721 23:44:01.005207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51764: use of closed network connection
	E0721 23:44:01.174041       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51782: use of closed network connection
	E0721 23:44:01.339015       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51802: use of closed network connection
	E0721 23:44:01.520023       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51812: use of closed network connection
	E0721 23:44:01.685538       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51822: use of closed network connection
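
Note: the "use of closed network connection" errors are the apiserver logging clients that dropped connections to the HA virtual IP 192.168.39.254:8443 mid-read; they are typically benign client churn rather than a server fault. VIP reachability can be spot-checked from inside a node (a sketch, not from the test run; -k skips TLS verification for a quick probe):

    out/minikube-linux-amd64 -p ha-564251 ssh "curl -sk https://192.168.39.254:8443/healthz"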
	
	
	==> kube-controller-manager [17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3] <==
	E0721 23:43:26.986885       1 certificate_controller.go:146] Sync csr-2pqtc failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2pqtc": the object has been modified; please apply your changes to the latest version and try again
	I0721 23:43:27.098942       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-564251-m03\" does not exist"
	I0721 23:43:27.115955       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-564251-m03" podCIDRs=["10.244.2.0/24"]
	I0721 23:43:29.542084       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-564251-m03"
	I0721 23:43:54.881326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.614392ms"
	I0721 23:43:54.914728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.253805ms"
	I0721 23:43:55.149752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="234.877996ms"
	I0721 23:43:55.362258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="212.313649ms"
	I0721 23:43:55.376751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.404727ms"
	I0721 23:43:55.377159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.648µs"
	I0721 23:43:56.791808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.097µs"
	I0721 23:43:57.035226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.56µs"
	I0721 23:43:58.288611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.371131ms"
	E0721 23:43:58.288939       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0721 23:43:58.289216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.28µs"
	I0721 23:43:58.294312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.946µs"
	I0721 23:43:58.663010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.119717ms"
	I0721 23:43:58.663248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.357µs"
	I0721 23:44:31.700464       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-564251-m04\" does not exist"
	I0721 23:44:31.759239       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-564251-m04" podCIDRs=["10.244.3.0/24"]
	I0721 23:44:34.568264       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-564251-m04"
	I0721 23:44:51.855822       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-564251-m04"
	I0721 23:45:50.694131       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-564251-m04"
	I0721 23:45:50.735804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.833733ms"
	I0721 23:45:50.735943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.502µs"
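
Note: the "object has been modified; please apply your changes to the latest version" errors are the API server's optimistic-concurrency conflicts (HTTP 409): an update carried a stale resourceVersion, and the controller simply re-reads and retries, which is why the subsequent sync attempts succeed. The guarding version can be inspected directly (a sketch, not from the test run):

    kubectl --context ha-564251 -n default get replicaset busybox-fc5497c4f \
      -o jsonpath='{.metadata.resourceVersion}'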
	
	
	==> kube-proxy [777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5] <==
	I0721 23:41:31.760987       1 server_linux.go:69] "Using iptables proxy"
	I0721 23:41:31.776156       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.91"]
	I0721 23:41:31.806920       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:41:31.806992       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:41:31.807008       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:41:31.809771       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:41:31.810322       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:41:31.810347       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:41:31.811902       1 config.go:192] "Starting service config controller"
	I0721 23:41:31.812086       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:41:31.812738       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:41:31.812771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:41:31.813817       1 config.go:319] "Starting node config controller"
	I0721 23:41:31.813839       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:41:31.912496       1 shared_informer.go:320] Caches are synced for service config
	I0721 23:41:31.913157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0721 23:41:31.913966       1 shared_informer.go:320] Caches are synced for node config
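
Note: kube-proxy probed for IPv6 iptables support, found none, and settled on single-stack IPv4 iptables mode before syncing its config caches. The active mode is also exposed on its metrics endpoint (a sketch, not from the test run; port 10249 and the /proxyMode path are upstream kube-proxy defaults):

    out/minikube-linux-amd64 -p ha-564251 ssh "curl -s http://localhost:10249/proxyMode"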
	
	
	==> kube-scheduler [22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624] <==
	E0721 23:43:27.174790       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t48tm\": pod kindnet-t48tm is already assigned to node \"ha-564251-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-t48tm" node="ha-564251-m03"
	E0721 23:43:27.174853       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2xlks\": pod kube-proxy-2xlks is already assigned to node \"ha-564251-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2xlks" node="ha-564251-m03"
	E0721 23:43:27.181792       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 67ba351a-20c6-442f-bc11-d1363ee387f7(kube-system/kube-proxy-2xlks) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2xlks"
	E0721 23:43:27.181860       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2xlks\": pod kube-proxy-2xlks is already assigned to node \"ha-564251-m03\"" pod="kube-system/kube-proxy-2xlks"
	I0721 23:43:27.181927       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2xlks" node="ha-564251-m03"
	E0721 23:43:27.181736       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod aba6c570-6264-44fd-8775-e6d340bebd1d(kube-system/kindnet-t48tm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-t48tm"
	E0721 23:43:27.184253       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t48tm\": pod kindnet-t48tm is already assigned to node \"ha-564251-m03\"" pod="kube-system/kindnet-t48tm"
	I0721 23:43:27.186380       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t48tm" node="ha-564251-m03"
	E0721 23:43:27.255933       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s2t8k\": pod kindnet-s2t8k is already assigned to node \"ha-564251-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-s2t8k" node="ha-564251-m03"
	E0721 23:43:27.255987       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 96cd07e3-b249-4f1b-a6c0-6e2bc2791df1(kube-system/kindnet-s2t8k) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-s2t8k"
	E0721 23:43:27.256006       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s2t8k\": pod kindnet-s2t8k is already assigned to node \"ha-564251-m03\"" pod="kube-system/kindnet-s2t8k"
	I0721 23:43:27.256025       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-s2t8k" node="ha-564251-m03"
	E0721 23:43:27.256220       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hks9x\": pod kube-proxy-hks9x is already assigned to node \"ha-564251-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hks9x" node="ha-564251-m03"
	E0721 23:43:27.256303       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 39d8a046-3214-49a6-9e1e-044e7ef50834(kube-system/kube-proxy-hks9x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hks9x"
	E0721 23:43:27.256392       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hks9x\": pod kube-proxy-hks9x is already assigned to node \"ha-564251-m03\"" pod="kube-system/kube-proxy-hks9x"
	I0721 23:43:27.258116       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hks9x" node="ha-564251-m03"
	E0721 23:43:55.105186       1 schedule_one.go:1067] "Error occurred" err="Pod default/busybox-fc5497c4f-s4brh is already present in the active queue" pod="default/busybox-fc5497c4f-s4brh"
	E0721 23:44:31.772650       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lv5zw\": pod kube-proxy-lv5zw is already assigned to node \"ha-564251-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lv5zw" node="ha-564251-m04"
	E0721 23:44:31.773002       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e18641cd-1554-44c4-8fe3-e0a8903f9a46(kube-system/kube-proxy-lv5zw) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lv5zw"
	E0721 23:44:31.773145       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lv5zw\": pod kube-proxy-lv5zw is already assigned to node \"ha-564251-m04\"" pod="kube-system/kube-proxy-lv5zw"
	I0721 23:44:31.773430       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lv5zw" node="ha-564251-m04"
	E0721 23:44:31.879012       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lg2lc\": pod kindnet-lg2lc is already assigned to node \"ha-564251-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lg2lc" node="ha-564251-m04"
	E0721 23:44:31.879975       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 84debccc-791a-4de4-b195-15eb22ba5a1c(kube-system/kindnet-lg2lc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lg2lc"
	E0721 23:44:31.880277       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lg2lc\": pod kindnet-lg2lc is already assigned to node \"ha-564251-m04\"" pod="kube-system/kindnet-lg2lc"
	I0721 23:44:31.880360       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lg2lc" node="ha-564251-m04"
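
Note: the "already assigned to node" binding failures occur when the DaemonSet controller places kube-proxy/kindnet pods on a joining node while the scheduler is still trying to bind them; the scheduler then detects the existing assignment and aborts requeueing, so these errors are transient. Final placement can be confirmed per pod (a sketch, not from the test run):

    kubectl --context ha-564251 -n kube-system get pod kube-proxy-lv5zw \
      -o jsonpath='{.spec.nodeName}'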
	
	
	==> kubelet <==
	Jul 21 23:43:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:43:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:43:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:43:54 ha-564251 kubelet[1363]: I0721 23:43:54.857874    1363 topology_manager.go:215] "Topology Admit Handler" podUID="dab5aa04-3324-424b-9a21-ad06a8974d43" podNamespace="default" podName="busybox-fc5497c4f-tvjh7"
	Jul 21 23:43:54 ha-564251 kubelet[1363]: I0721 23:43:54.883134    1363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx6j2\" (UniqueName: \"kubernetes.io/projected/dab5aa04-3324-424b-9a21-ad06a8974d43-kube-api-access-kx6j2\") pod \"busybox-fc5497c4f-tvjh7\" (UID: \"dab5aa04-3324-424b-9a21-ad06a8974d43\") " pod="default/busybox-fc5497c4f-tvjh7"
	Jul 21 23:44:20 ha-564251 kubelet[1363]: E0721 23:44:20.022412    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:44:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:44:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:44:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:44:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:45:20 ha-564251 kubelet[1363]: E0721 23:45:20.022211    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:45:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:45:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:45:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:45:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:46:20 ha-564251 kubelet[1363]: E0721 23:46:20.023977    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:46:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:46:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:46:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:46:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:47:20 ha-564251 kubelet[1363]: E0721 23:47:20.021242    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:47:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:47:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:47:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:47:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
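
Note: the hourly "Could not set up iptables canary" errors come from kubelet probing ip6tables on a guest kernel that lacks the ip6table_nat module; on this IPv4-only cluster they are cosmetic. Whether the module is loaded can be checked from the node (a sketch, not from the test run):

    out/minikube-linux-amd64 -p ha-564251 ssh "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"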
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-564251 -n ha-564251
helpers_test.go:261: (dbg) Run:  kubectl --context ha-564251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (60.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
E0721 23:47:39.016880   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 3 (3.205464615s)
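
Note: status exits non-zero as soon as any node in the profile is unhealthy, which is why the harness records exit status 3 while ha-564251-m02 is still coming back up. For scripting, the same per-node breakdown is available as JSON (a sketch, not from the test run; --output json as in current minikube releases):

    out/minikube-linux-amd64 -p ha-564251 status --output json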

                                                
                                                
-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 23:47:36.699106   27976 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:47:36.699213   27976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:36.699221   27976 out.go:304] Setting ErrFile to fd 2...
	I0721 23:47:36.699225   27976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:36.699423   27976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:47:36.699608   27976 out.go:298] Setting JSON to false
	I0721 23:47:36.699634   27976 mustload.go:65] Loading cluster: ha-564251
	I0721 23:47:36.699687   27976 notify.go:220] Checking for updates...
	I0721 23:47:36.700005   27976 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:47:36.700023   27976 status.go:255] checking status of ha-564251 ...
	I0721 23:47:36.700392   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:36.700445   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:36.719439   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I0721 23:47:36.719797   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:36.720476   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:36.720513   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:36.720894   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:36.721100   27976 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:47:36.722669   27976 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:47:36.722687   27976 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:36.722945   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:36.722980   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:36.738040   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0721 23:47:36.738433   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:36.738954   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:36.738976   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:36.739305   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:36.739479   27976 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:47:36.742257   27976 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:36.742683   27976 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:36.742708   27976 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:36.742883   27976 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:36.743214   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:36.743253   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:36.758401   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0721 23:47:36.758923   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:36.759391   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:36.759413   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:36.759731   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:36.759936   27976 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:47:36.760179   27976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:36.760209   27976 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:47:36.762434   27976 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:36.762808   27976 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:36.762840   27976 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:36.762926   27976 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:47:36.763084   27976 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:47:36.763208   27976 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:47:36.763357   27976 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:47:36.838489   27976 ssh_runner.go:195] Run: systemctl --version
	I0721 23:47:36.844815   27976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:36.860002   27976 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:36.860027   27976 api_server.go:166] Checking apiserver status ...
	I0721 23:47:36.860065   27976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:36.878779   27976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:47:36.888902   27976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:36.888965   27976 ssh_runner.go:195] Run: ls
	I0721 23:47:36.893469   27976 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:36.899843   27976 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:36.899872   27976 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:47:36.899885   27976 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:36.899905   27976 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:47:36.900318   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:36.900364   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:36.915184   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I0721 23:47:36.915579   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:36.916013   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:36.916033   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:36.916388   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:36.916575   27976 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:47:36.918232   27976 status.go:330] ha-564251-m02 host status = "Running" (err=<nil>)
	I0721 23:47:36.918244   27976 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:36.918519   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:36.918550   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:36.933857   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41329
	I0721 23:47:36.934222   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:36.934667   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:36.934692   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:36.935023   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:36.935196   27976 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:47:36.938366   27976 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:36.938835   27976 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:36.938859   27976 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:36.938980   27976 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:36.939271   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:36.939308   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:36.953660   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0721 23:47:36.954025   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:36.954530   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:36.954546   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:36.954852   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:36.955029   27976 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:47:36.955204   27976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:36.955225   27976 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:47:36.957565   27976 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:36.957966   27976 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:36.957992   27976 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:36.958124   27976 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:47:36.958255   27976 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:47:36.958410   27976 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:47:36.958543   27976 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	W0721 23:47:39.518861   27976 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:47:39.518953   27976 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0721 23:47:39.518971   27976 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:39.518979   27976 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 23:47:39.518998   27976 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:39.519005   27976 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:47:39.519301   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:39.519344   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:39.534788   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37553
	I0721 23:47:39.535205   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:39.535771   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:39.535807   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:39.536153   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:39.536386   27976 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:47:39.538107   27976 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:47:39.538125   27976 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:39.538436   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:39.538475   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:39.552876   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0721 23:47:39.553289   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:39.553669   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:39.553689   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:39.553970   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:39.554134   27976 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:47:39.556533   27976 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:39.556850   27976 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:39.556882   27976 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:39.556982   27976 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:39.557368   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:39.557409   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:39.572655   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I0721 23:47:39.573035   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:39.573477   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:39.573496   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:39.573760   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:39.573943   27976 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:47:39.574121   27976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:39.574147   27976 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:47:39.576745   27976 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:39.577195   27976 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:39.577218   27976 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:39.577387   27976 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:47:39.577546   27976 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:47:39.577710   27976 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:47:39.577866   27976 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:47:39.658353   27976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:39.679579   27976 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:39.679605   27976 api_server.go:166] Checking apiserver status ...
	I0721 23:47:39.679643   27976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:39.694409   27976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:47:39.704535   27976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:39.704595   27976 ssh_runner.go:195] Run: ls
	I0721 23:47:39.708804   27976 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:39.713046   27976 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:39.713066   27976 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:47:39.713073   27976 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:39.713089   27976 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:47:39.713362   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:39.713393   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:39.728086   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0721 23:47:39.728438   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:39.728850   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:39.728872   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:39.729159   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:39.729314   27976 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:47:39.730868   27976 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:47:39.730894   27976 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:39.731258   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:39.731297   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:39.746198   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39735
	I0721 23:47:39.746582   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:39.747066   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:39.747086   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:39.747356   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:39.747504   27976 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:47:39.750088   27976 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:39.750470   27976 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:39.750502   27976 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:39.750673   27976 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:39.750998   27976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:39.751040   27976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:39.765527   27976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0721 23:47:39.765925   27976 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:39.766361   27976 main.go:141] libmachine: Using API Version  1
	I0721 23:47:39.766380   27976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:39.766685   27976 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:39.766888   27976 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:47:39.767062   27976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:39.767084   27976 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:47:39.769399   27976 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:39.769760   27976 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:39.769788   27976 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:39.769918   27976 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:47:39.770074   27976 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:47:39.770219   27976 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:47:39.770359   27976 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:47:39.849751   27976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:39.863747   27976 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
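
Note: the stderr trace explains the m02 "Error" row above: every SSH dial to 192.168.39.202:22 fails with "no route to host", so kubelet and apiserver states are recorded as Nonexistent, while the m04 worker legitimately reports no apiserver of its own. The node's SSH port can be probed directly while it restarts (a sketch, not from the test run):

    nc -vz -w 2 192.168.39.202 22
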
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 3 (5.035451984s)

                                                
                                                
-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 23:47:41.004591   28076 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:47:41.004706   28076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:41.004715   28076 out.go:304] Setting ErrFile to fd 2...
	I0721 23:47:41.004721   28076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:41.005348   28076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:47:41.005693   28076 out.go:298] Setting JSON to false
	I0721 23:47:41.005740   28076 mustload.go:65] Loading cluster: ha-564251
	I0721 23:47:41.005843   28076 notify.go:220] Checking for updates...
	I0721 23:47:41.006464   28076 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:47:41.006487   28076 status.go:255] checking status of ha-564251 ...
	I0721 23:47:41.007000   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:41.007042   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:41.022620   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0721 23:47:41.022984   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:41.023553   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:41.023574   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:41.023922   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:41.024175   28076 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:47:41.025824   28076 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:47:41.025843   28076 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:41.026263   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:41.026306   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:41.042489   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44945
	I0721 23:47:41.043020   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:41.043541   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:41.043570   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:41.043891   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:41.044094   28076 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:47:41.047435   28076 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:41.047905   28076 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:41.047979   28076 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:41.048118   28076 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:41.048472   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:41.048525   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:41.065164   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0721 23:47:41.065531   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:41.065953   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:41.065975   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:41.066259   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:41.066460   28076 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:47:41.066684   28076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:41.066706   28076 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:47:41.069555   28076 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:41.069992   28076 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:41.070024   28076 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:41.070145   28076 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:47:41.070303   28076 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:47:41.070441   28076 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:47:41.070672   28076 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:47:41.146899   28076 ssh_runner.go:195] Run: systemctl --version
	I0721 23:47:41.153466   28076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:41.169193   28076 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:41.169224   28076 api_server.go:166] Checking apiserver status ...
	I0721 23:47:41.169256   28076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:41.183199   28076 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:47:41.193471   28076 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:41.193532   28076 ssh_runner.go:195] Run: ls
	I0721 23:47:41.197713   28076 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:41.204053   28076 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:41.204075   28076 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:47:41.204088   28076 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:41.204111   28076 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:47:41.204421   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:41.204458   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:41.219712   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I0721 23:47:41.220161   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:41.220645   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:41.220667   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:41.220904   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:41.221064   28076 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:47:41.222574   28076 status.go:330] ha-564251-m02 host status = "Running" (err=<nil>)
	I0721 23:47:41.222589   28076 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:41.222893   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:41.222925   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:41.237952   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37941
	I0721 23:47:41.238386   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:41.238881   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:41.238905   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:41.239243   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:41.239469   28076 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:47:41.242285   28076 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:41.242745   28076 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:41.242772   28076 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:41.242905   28076 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:41.243212   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:41.243242   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:41.257931   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42477
	I0721 23:47:41.258289   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:41.258756   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:41.258781   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:41.259072   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:41.259268   28076 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:47:41.259433   28076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:41.259457   28076 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:47:41.262213   28076 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:41.262658   28076 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:41.262688   28076 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:41.262779   28076 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:47:41.262948   28076 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:47:41.263077   28076 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:47:41.263188   28076 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	W0721 23:47:42.590947   28076 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:42.591008   28076 retry.go:31] will retry after 280.042399ms: dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:47:45.662897   28076 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:47:45.662995   28076 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0721 23:47:45.663015   28076 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:45.663022   28076 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 23:47:45.663052   28076 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:45.663063   28076 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:47:45.663384   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:45.663439   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:45.678240   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43049
	I0721 23:47:45.678704   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:45.679189   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:45.679209   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:45.679546   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:45.679754   28076 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:47:45.681307   28076 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:47:45.681324   28076 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:45.681718   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:45.681759   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:45.696715   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0721 23:47:45.697087   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:45.697581   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:45.697603   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:45.697967   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:45.698145   28076 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:47:45.700936   28076 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:45.701393   28076 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:45.701429   28076 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:45.701574   28076 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:45.701913   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:45.701948   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:45.716745   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41777
	I0721 23:47:45.717173   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:45.717625   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:45.717646   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:45.717940   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:45.718146   28076 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:47:45.718317   28076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:45.718338   28076 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:47:45.720780   28076 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:45.721215   28076 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:45.721242   28076 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:45.721582   28076 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:47:45.721755   28076 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:47:45.721926   28076 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:47:45.722069   28076 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:47:45.802066   28076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:45.817530   28076 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:45.817562   28076 api_server.go:166] Checking apiserver status ...
	I0721 23:47:45.817591   28076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:45.830659   28076 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:47:45.839465   28076 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:45.839510   28076 ssh_runner.go:195] Run: ls
	I0721 23:47:45.843389   28076 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:45.848283   28076 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:45.848304   28076 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:47:45.848314   28076 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:45.848331   28076 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:47:45.848623   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:45.848662   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:45.865218   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0721 23:47:45.865620   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:45.866119   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:45.866144   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:45.866464   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:45.866671   28076 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:47:45.868218   28076 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:47:45.868233   28076 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:45.868616   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:45.868660   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:45.883116   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0721 23:47:45.883603   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:45.884069   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:45.884093   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:45.884382   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:45.884581   28076 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:47:45.887185   28076 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:45.887605   28076 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:45.887631   28076 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:45.887712   28076 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:45.888086   28076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:45.888125   28076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:45.902982   28076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
	I0721 23:47:45.903397   28076 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:45.903902   28076 main.go:141] libmachine: Using API Version  1
	I0721 23:47:45.903925   28076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:45.904219   28076 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:45.904459   28076 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:47:45.904662   28076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:45.904689   28076 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:47:45.907479   28076 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:45.907896   28076 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:45.907934   28076 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:45.908057   28076 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:47:45.908219   28076 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:47:45.908404   28076 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:47:45.908550   28076 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:47:45.985763   28076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:45.999770   28076 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
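
Note: each node probe above follows the same sequence: launch a kvm2 plugin RPC server, resolve the node IP from its libvirt DHCP lease, open an SSH session, then run the checks ("df -h /var | awk 'NR==2{print $5}'" reads the Use% column for /var, "systemctl is-active --quiet service kubelet" checks the kubelet, and the apiserver is probed via healthz). For ha-564251-m02 the SSH dial fails with "no route to host", which is why that node reports Host:Error with kubelet and apiserver Nonexistent. The retry.go:31 lines reflect a dial-and-retry loop; the sketch below illustrates that pattern with assumed timeouts, attempt count, and backoff -- it is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry mirrors the dial/retry pattern in the sshutil and retry
	// log lines above: attempt the TCP dial, log the failure, back off, retry.
	func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			fmt.Printf("dial failure (will retry): %v\n", err)
			time.Sleep(backoff)
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		conn, err := dialWithRetry("192.168.39.202:22", 3, 300*time.Millisecond)
		if err != nil {
			fmt.Println(err) // expected against the unreachable m02 node above
			return
		}
		conn.Close()
	}
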
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 3 (4.591371674s)

-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0721 23:47:47.888825   28182 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:47:47.889064   28182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:47.889074   28182 out.go:304] Setting ErrFile to fd 2...
	I0721 23:47:47.889080   28182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:47.889626   28182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:47:47.889891   28182 out.go:298] Setting JSON to false
	I0721 23:47:47.889922   28182 mustload.go:65] Loading cluster: ha-564251
	I0721 23:47:47.890356   28182 notify.go:220] Checking for updates...
	I0721 23:47:47.890874   28182 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:47:47.890898   28182 status.go:255] checking status of ha-564251 ...
	I0721 23:47:47.891318   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:47.891355   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:47.910947   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0721 23:47:47.911367   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:47.912022   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:47.912047   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:47.912488   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:47.912722   28182 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:47:47.914337   28182 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:47:47.914353   28182 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:47.914780   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:47.914834   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:47.930084   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0721 23:47:47.930509   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:47.931020   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:47.931043   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:47.931299   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:47.931481   28182 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:47:47.934194   28182 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:47.934593   28182 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:47.934649   28182 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:47.934787   28182 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:47.935092   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:47.935132   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:47.951309   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I0721 23:47:47.951682   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:47.952070   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:47.952093   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:47.952368   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:47.952527   28182 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:47:47.952714   28182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:47.952733   28182 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:47:47.955341   28182 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:47.955786   28182 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:47.955810   28182 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:47.955970   28182 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:47:47.956127   28182 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:47:47.956271   28182 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:47:47.956411   28182 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:47:48.034309   28182 ssh_runner.go:195] Run: systemctl --version
	I0721 23:47:48.039960   28182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:48.057522   28182 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:48.057551   28182 api_server.go:166] Checking apiserver status ...
	I0721 23:47:48.057586   28182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:48.070470   28182 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:47:48.079682   28182 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:48.079733   28182 ssh_runner.go:195] Run: ls
	I0721 23:47:48.084011   28182 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:48.088041   28182 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:48.088060   28182 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:47:48.088069   28182 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:48.088088   28182 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:47:48.088424   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:48.088456   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:48.103241   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I0721 23:47:48.103616   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:48.104071   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:48.104095   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:48.104410   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:48.104623   28182 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:47:48.106131   28182 status.go:330] ha-564251-m02 host status = "Running" (err=<nil>)
	I0721 23:47:48.106146   28182 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:48.106434   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:48.106464   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:48.122295   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0721 23:47:48.122727   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:48.123267   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:48.123310   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:48.123628   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:48.123825   28182 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:47:48.127302   28182 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:48.127722   28182 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:48.127747   28182 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:48.127892   28182 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:48.128172   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:48.128211   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:48.143039   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I0721 23:47:48.143424   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:48.143881   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:48.143900   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:48.144214   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:48.144394   28182 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:47:48.144586   28182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:48.144604   28182 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:47:48.146990   28182 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:48.147395   28182 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:48.147423   28182 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:48.147597   28182 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:47:48.147739   28182 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:47:48.147836   28182 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:47:48.147965   28182 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	W0721 23:47:48.734812   28182 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:48.734854   28182 retry.go:31] will retry after 310.518991ms: dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:47:52.094907   28182 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:47:52.095002   28182 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0721 23:47:52.095023   28182 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:52.095034   28182 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 23:47:52.095059   28182 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:52.095070   28182 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:47:52.095506   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:52.095565   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:52.111059   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I0721 23:47:52.111478   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:52.111930   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:52.111955   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:52.112257   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:52.112453   28182 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:47:52.113934   28182 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:47:52.113964   28182 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:52.114326   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:52.114374   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:52.129791   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40623
	I0721 23:47:52.130205   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:52.130770   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:52.130790   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:52.131061   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:52.131237   28182 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:47:52.133619   28182 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:52.134053   28182 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:52.134085   28182 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:52.134229   28182 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:52.134515   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:52.134549   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:52.149767   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0721 23:47:52.150154   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:52.150657   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:52.150685   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:52.151034   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:52.151230   28182 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:47:52.151491   28182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:52.151511   28182 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:47:52.154322   28182 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:52.154760   28182 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:52.154786   28182 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:52.154968   28182 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:47:52.155176   28182 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:47:52.155416   28182 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:47:52.155583   28182 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:47:52.243083   28182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:52.257238   28182 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:52.257272   28182 api_server.go:166] Checking apiserver status ...
	I0721 23:47:52.257303   28182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:52.270724   28182 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:47:52.279839   28182 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:52.279897   28182 ssh_runner.go:195] Run: ls
	I0721 23:47:52.283721   28182 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:52.290657   28182 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:52.290683   28182 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:47:52.290694   28182 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:52.290714   28182 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:47:52.291109   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:52.291146   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:52.306907   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0721 23:47:52.307373   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:52.307879   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:52.307904   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:52.308209   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:52.308526   28182 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:47:52.310212   28182 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:47:52.310227   28182 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:52.310541   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:52.310595   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:52.324904   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0721 23:47:52.325316   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:52.325756   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:52.325775   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:52.326045   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:52.326215   28182 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:47:52.328925   28182 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:52.329351   28182 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:52.329379   28182 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:52.329532   28182 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:52.329931   28182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:52.329972   28182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:52.344670   28182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I0721 23:47:52.345040   28182 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:52.345474   28182 main.go:141] libmachine: Using API Version  1
	I0721 23:47:52.345500   28182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:52.345786   28182 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:52.345910   28182 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:47:52.346079   28182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:52.346101   28182 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:47:52.348530   28182 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:52.348875   28182 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:52.348909   28182 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:52.349001   28182 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:47:52.349184   28182 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:47:52.349340   28182 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:47:52.349467   28182 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:47:52.425975   28182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:52.439289   28182 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
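
Note: the recurring "unable to find freezer cgroup" warning is expected on cgroup v2 guests. There /proc/<pid>/cgroup holds a single "0::<path>" entry instead of per-controller "<n>:freezer:<path>" lines, so the "egrep ^[0-9]+:freezer:" probe exits 1 and the status check falls through to the healthz probe instead. A minimal detection sketch of that distinction (assumed behavior, not the actual api_server.go logic):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/proc/self/cgroup")
		if err != nil {
			fmt.Println("read error:", err)
			return
		}
		for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
			if strings.HasPrefix(line, "0::") {
				// cgroup v2 unified hierarchy: no per-controller freezer
				// entry, which is why the egrep above exits with status 1.
				fmt.Println("cgroup v2:", line)
				return
			}
			if strings.Contains(line, ":freezer:") {
				fmt.Println("cgroup v1 freezer entry:", line)
				return
			}
		}
		fmt.Println("no freezer entry found")
	}
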
E0721 23:47:54.283436   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
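
Note: the cert_rotation error above appears to come from a leftover client-go certificate watcher that still references the client.crt of the earlier addons-688294 profile, and looks unrelated to the ha-564251 status checks. The apiserver health probe itself is an HTTPS GET against the shared control-plane endpoint from the kubeconfig (192.168.39.254:8443, not a per-node IP), treating a 200 response with body "ok" as healthy. A minimal sketch -- the InsecureSkipVerify shortcut is for illustration only, since the real check authenticates with the profile's client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		}}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}
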
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 3 (3.965333783s)

-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0721 23:47:54.802501   28290 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:47:54.802643   28290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:54.802654   28290 out.go:304] Setting ErrFile to fd 2...
	I0721 23:47:54.802661   28290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:47:54.802837   28290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:47:54.802984   28290 out.go:298] Setting JSON to false
	I0721 23:47:54.803010   28290 mustload.go:65] Loading cluster: ha-564251
	I0721 23:47:54.803071   28290 notify.go:220] Checking for updates...
	I0721 23:47:54.803342   28290 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:47:54.803355   28290 status.go:255] checking status of ha-564251 ...
	I0721 23:47:54.803733   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:54.803796   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:54.823335   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0721 23:47:54.823779   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:54.824411   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:54.824443   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:54.824813   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:54.825011   28290 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:47:54.826820   28290 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:47:54.826842   28290 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:54.827113   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:54.827146   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:54.842152   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I0721 23:47:54.842590   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:54.843056   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:54.843079   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:54.843382   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:54.843572   28290 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:47:54.846400   28290 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:54.846881   28290 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:54.846903   28290 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:54.847063   28290 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:47:54.847344   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:54.847377   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:54.863592   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0721 23:47:54.863970   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:54.864498   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:54.864521   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:54.864855   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:54.865051   28290 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:47:54.865335   28290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:54.865367   28290 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:47:54.868450   28290 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:54.868897   28290 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:47:54.868925   28290 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:47:54.869041   28290 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:47:54.869214   28290 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:47:54.869334   28290 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:47:54.869446   28290 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:47:54.946750   28290 ssh_runner.go:195] Run: systemctl --version
	I0721 23:47:54.953195   28290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:54.969193   28290 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:54.969216   28290 api_server.go:166] Checking apiserver status ...
	I0721 23:47:54.969244   28290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:54.982148   28290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:47:54.990737   28290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:54.990802   28290 ssh_runner.go:195] Run: ls
	I0721 23:47:54.996165   28290 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:55.001588   28290 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:55.001609   28290 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:47:55.001618   28290 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:55.001632   28290 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:47:55.001898   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:55.001931   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:55.017755   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36947
	I0721 23:47:55.018112   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:55.018640   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:55.018678   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:55.019004   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:55.019248   28290 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:47:55.020617   28290 status.go:330] ha-564251-m02 host status = "Running" (err=<nil>)
	I0721 23:47:55.020632   28290 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:55.020902   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:55.020930   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:55.035576   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0721 23:47:55.035936   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:55.036378   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:55.036401   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:55.036732   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:55.036982   28290 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:47:55.039868   28290 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:55.040290   28290 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:55.040316   28290 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:55.040497   28290 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:47:55.040790   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:55.040838   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:55.057632   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0721 23:47:55.058078   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:55.058504   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:55.058534   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:55.058900   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:55.059073   28290 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:47:55.059301   28290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:55.059329   28290 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:47:55.062241   28290 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:55.062679   28290 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:47:55.062705   28290 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:47:55.062818   28290 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:47:55.062999   28290 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:47:55.063169   28290 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:47:55.063339   28290 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	W0721 23:47:55.166820   28290 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:55.166890   28290 retry.go:31] will retry after 178.669427ms: dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:47:58.398881   28290 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:47:58.398971   28290 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0721 23:47:58.398988   28290 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:58.398997   28290 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 23:47:58.399023   28290 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:47:58.399033   28290 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:47:58.399341   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:58.399386   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:58.414050   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I0721 23:47:58.414463   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:58.414909   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:58.414935   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:58.415256   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:58.415458   28290 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:47:58.417023   28290 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:47:58.417038   28290 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:58.417365   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:58.417398   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:58.431745   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44365
	I0721 23:47:58.432114   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:58.432564   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:58.432588   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:58.432879   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:58.433048   28290 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:47:58.435732   28290 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:58.436145   28290 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:58.436184   28290 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:58.436292   28290 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:47:58.436577   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:58.436606   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:58.450517   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34685
	I0721 23:47:58.450951   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:58.451514   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:58.451541   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:58.451822   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:58.451998   28290 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:47:58.452178   28290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:58.452199   28290 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:47:58.455002   28290 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:58.455510   28290 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:47:58.455551   28290 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:47:58.455674   28290 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:47:58.455837   28290 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:47:58.455978   28290 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:47:58.456092   28290 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:47:58.534103   28290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:58.548282   28290 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:47:58.548314   28290 api_server.go:166] Checking apiserver status ...
	I0721 23:47:58.548353   28290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:47:58.561641   28290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:47:58.570810   28290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:47:58.570855   28290 ssh_runner.go:195] Run: ls
	I0721 23:47:58.574749   28290 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:47:58.578979   28290 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:47:58.578998   28290 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:47:58.579013   28290 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:47:58.579028   28290 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:47:58.579376   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:58.579410   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:58.594055   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0721 23:47:58.594480   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:58.594902   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:58.594921   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:58.595212   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:58.595382   28290 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:47:58.596786   28290 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:47:58.596813   28290 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:58.597063   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:58.597093   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:58.611218   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46677
	I0721 23:47:58.611656   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:58.612063   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:58.612089   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:58.612383   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:58.612570   28290 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:47:58.615135   28290 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:58.615553   28290 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:58.615600   28290 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:58.615695   28290 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:47:58.615963   28290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:47:58.615994   28290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:47:58.631233   28290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41953
	I0721 23:47:58.631579   28290 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:47:58.631958   28290 main.go:141] libmachine: Using API Version  1
	I0721 23:47:58.631979   28290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:47:58.632259   28290 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:47:58.632423   28290 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:47:58.632597   28290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:47:58.632618   28290 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:47:58.635473   28290 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:58.635913   28290 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:47:58.635940   28290 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:47:58.636068   28290 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:47:58.636213   28290 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:47:58.636372   28290 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:47:58.636501   28290 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:47:58.713817   28290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:47:58.726556   28290 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
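
The exit status 3 above is the "host error" outcome: the SSH dial to ha-564251-m02 (192.168.39.202:22) fails with "no route to host" before the disk-usage probe can run, so the node is reported as Host:Error / Kubelet:Nonexistent. For orientation, the probe recorded by the sshutil.go and status.go lines reduces to roughly the following Go sketch. It assumes golang.org/x/crypto/ssh; the probeVar helper, the host-key handling, and the literal address and key path are illustrative only, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeVar opens an SSH session to a node and reads the usage of /var,
// mirroring the df -h /var | awk 'NR==2{print $5}' runs in the trace.
func probeVar(addr, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // matches Username:docker in the trace
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch-only shortcut
		Timeout:         5 * time.Second,
	}
	// For an unreachable node the TCP dial itself fails with
	// "no route to host"; no remote command ever runs.
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output(`df -h /var | awk 'NR==2{print $5}'`)
	return string(out), err
}

func main() {
	// Illustrative address and key path, patterned on the trace above.
	usage, err := probeVar("192.168.39.202:22",
		"/home/jenkins/.minikube/machines/ha-564251-m02/id_rsa")
	if err != nil {
		fmt.Println("status error:", err)
		return
	}
	fmt.Println("/var usage:", usage)
}

Because the failure happens at the dial, the same wrapped error (NewSession: new client: new client: dial tcp ...) appears for both the df probe and the overall node status.
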
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 3 (3.701426728s)

-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0721 23:48:02.622388   28407 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:48:02.622479   28407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:02.622487   28407 out.go:304] Setting ErrFile to fd 2...
	I0721 23:48:02.622491   28407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:02.622722   28407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:48:02.622870   28407 out.go:298] Setting JSON to false
	I0721 23:48:02.622896   28407 mustload.go:65] Loading cluster: ha-564251
	I0721 23:48:02.622999   28407 notify.go:220] Checking for updates...
	I0721 23:48:02.623289   28407 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:48:02.623302   28407 status.go:255] checking status of ha-564251 ...
	I0721 23:48:02.623665   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:02.623730   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:02.643476   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0721 23:48:02.644001   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:02.644551   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:02.644588   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:02.644893   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:02.645069   28407 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:48:02.646672   28407 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:48:02.646689   28407 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:48:02.646934   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:02.646968   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:02.661291   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I0721 23:48:02.661676   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:02.662116   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:02.662138   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:02.662487   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:02.662726   28407 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:48:02.665376   28407 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:02.665780   28407 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:48:02.665799   28407 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:02.665909   28407 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:48:02.666187   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:02.666230   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:02.680191   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0721 23:48:02.680567   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:02.681011   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:02.681034   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:02.681312   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:02.681467   28407 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:48:02.681674   28407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:02.681706   28407 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:48:02.684175   28407 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:02.684572   28407 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:48:02.684597   28407 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:02.684717   28407 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:48:02.684844   28407 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:48:02.684967   28407 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:48:02.685087   28407 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:48:02.758336   28407 ssh_runner.go:195] Run: systemctl --version
	I0721 23:48:02.764560   28407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:02.780537   28407 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:48:02.780561   28407 api_server.go:166] Checking apiserver status ...
	I0721 23:48:02.780595   28407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:48:02.794897   28407 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:48:02.803774   28407 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:48:02.803822   28407 ssh_runner.go:195] Run: ls
	I0721 23:48:02.808167   28407 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:48:02.812042   28407 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:48:02.812060   28407 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:48:02.812069   28407 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:02.812084   28407 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:48:02.812361   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:02.812395   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:02.827726   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0721 23:48:02.828088   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:02.828499   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:02.828517   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:02.828817   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:02.829000   28407 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:48:02.830533   28407 status.go:330] ha-564251-m02 host status = "Running" (err=<nil>)
	I0721 23:48:02.830549   28407 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:48:02.830858   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:02.830889   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:02.844912   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0721 23:48:02.845278   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:02.845862   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:02.845891   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:02.846191   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:02.846404   28407 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:48:02.849380   28407 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:48:02.849832   28407 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:48:02.849857   28407 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:48:02.850038   28407 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:48:02.850344   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:02.850379   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:02.866149   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0721 23:48:02.866573   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:02.867085   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:02.867110   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:02.867432   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:02.867639   28407 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:48:02.867846   28407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:02.867870   28407 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:48:02.870449   28407 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:48:02.870940   28407 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:48:02.870993   28407 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:48:02.871084   28407 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:48:02.871281   28407 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:48:02.871462   28407 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:48:02.871623   28407 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	W0721 23:48:05.951021   28407 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0721 23:48:05.951117   28407 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0721 23:48:05.951131   28407 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:48:05.951140   28407 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 23:48:05.951158   28407 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0721 23:48:05.951166   28407 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:48:05.951512   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:05.951557   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:05.965916   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0721 23:48:05.966345   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:05.966820   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:05.966842   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:05.967106   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:05.967276   28407 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:48:05.968752   28407 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:48:05.968769   28407 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:48:05.969091   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:05.969128   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:05.984178   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0721 23:48:05.984592   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:05.985028   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:05.985048   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:05.985347   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:05.985523   28407 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:48:05.988001   28407 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:05.988393   28407 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:05.988429   28407 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:05.988548   28407 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:48:05.988919   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:05.988955   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:06.002897   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0721 23:48:06.003297   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:06.003816   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:06.003837   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:06.004133   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:06.004314   28407 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:48:06.004491   28407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:06.004517   28407 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:48:06.006999   28407 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:06.007416   28407 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:06.007447   28407 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:06.007581   28407 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:48:06.007723   28407 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:48:06.007846   28407 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:48:06.008040   28407 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:48:06.085647   28407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:06.099511   28407 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:48:06.099537   28407 api_server.go:166] Checking apiserver status ...
	I0721 23:48:06.099569   28407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:48:06.112891   28407 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:48:06.122709   28407 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:48:06.122758   28407 ssh_runner.go:195] Run: ls
	I0721 23:48:06.126824   28407 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:48:06.130938   28407 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:48:06.130956   28407 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:48:06.130964   28407 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:06.130979   28407 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:48:06.131267   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:06.131308   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:06.146108   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37439
	I0721 23:48:06.146495   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:06.146975   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:06.146994   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:06.147324   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:06.147491   28407 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:48:06.149022   28407 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:48:06.149039   28407 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:48:06.149311   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:06.149347   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:06.164174   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0721 23:48:06.164542   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:06.164920   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:06.164939   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:06.165235   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:06.165436   28407 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:48:06.168023   28407 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:06.168440   28407 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:06.168474   28407 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:06.168550   28407 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:48:06.168835   28407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:06.168872   28407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:06.183676   28407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42927
	I0721 23:48:06.184075   28407 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:06.184508   28407 main.go:141] libmachine: Using API Version  1
	I0721 23:48:06.184526   28407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:06.184821   28407 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:06.184999   28407 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:48:06.185158   28407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:06.185176   28407 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:48:06.187754   28407 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:06.188110   28407 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:06.188138   28407 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:06.188252   28407 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:48:06.188421   28407 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:48:06.188554   28407 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:48:06.188685   28407 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:48:06.269212   28407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:06.282370   28407 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
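
Two things change from this point in the trace. First, once libvirt reports the m02 domain as shut off, GetState returns "Stopped" and the status check skips SSH entirely ("host is not running, skipping remaining checks" below), so the later runs exit with status 7 rather than 3. Second, every run logs "unable to find freezer cgroup": the check greps /proc/<pid>/cgroup for a cgroup v1 freezer controller entry to locate the apiserver's cgroup, and on a cgroup v2 guest every line has the form 0::<path>, so the grep exits 1 and the code falls through to the healthz probe. A minimal sketch of that lookup, using a hypothetical findFreezer helper rather than minikube's own code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findFreezer scans /proc/<pid>/cgroup for a v1 "freezer" controller entry,
// the same thing the sudo egrep ^[0-9]+:freezer: runs above attempt.
func findFreezer(pid int) (string, bool) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", false
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// cgroup v1 lines look like "7:freezer:/kubepods/...";
		// on cgroup v2 every line is "0::<path>", so nothing matches.
		parts := strings.SplitN(sc.Text(), ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return parts[2], true
		}
	}
	return "", false
}

func main() {
	if path, ok := findFreezer(1214); ok {
		fmt.Println("freezer cgroup:", path)
	} else {
		fmt.Println("no freezer controller (likely cgroup v2)")
	}
}

The warning is therefore cosmetic on cgroup v2 hosts; as the trace shows, the status still resolves through the subsequent healthz check.
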
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 7 (590.875987ms)

-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0721 23:48:13.328572   28545 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:48:13.328668   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:13.328675   28545 out.go:304] Setting ErrFile to fd 2...
	I0721 23:48:13.328679   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:13.328833   28545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:48:13.328984   28545 out.go:298] Setting JSON to false
	I0721 23:48:13.329012   28545 mustload.go:65] Loading cluster: ha-564251
	I0721 23:48:13.329115   28545 notify.go:220] Checking for updates...
	I0721 23:48:13.329356   28545 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:48:13.329372   28545 status.go:255] checking status of ha-564251 ...
	I0721 23:48:13.329721   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.329772   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.349922   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44919
	I0721 23:48:13.350389   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.350915   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.350934   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.351208   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.351397   28545 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:48:13.353100   28545 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:48:13.353116   28545 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:48:13.353423   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.353460   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.367396   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0721 23:48:13.367749   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.368191   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.368241   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.368541   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.368721   28545 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:48:13.371352   28545 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:13.371775   28545 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:48:13.371797   28545 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:13.371920   28545 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:48:13.372228   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.372293   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.387468   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0721 23:48:13.387880   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.388307   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.388332   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.388682   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.388873   28545 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:48:13.389088   28545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:13.389119   28545 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:48:13.391773   28545 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:13.392174   28545 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:48:13.392198   28545 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:13.392295   28545 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:48:13.392463   28545 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:48:13.392690   28545 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:48:13.392810   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:48:13.474437   28545 ssh_runner.go:195] Run: systemctl --version
	I0721 23:48:13.480172   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:13.496851   28545 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:48:13.496890   28545 api_server.go:166] Checking apiserver status ...
	I0721 23:48:13.496936   28545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:48:13.512673   28545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:48:13.522660   28545 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:48:13.522721   28545 ssh_runner.go:195] Run: ls
	I0721 23:48:13.526951   28545 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:48:13.531379   28545 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:48:13.531400   28545 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:48:13.531409   28545 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:13.531425   28545 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:48:13.531744   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.531776   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.546424   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I0721 23:48:13.546826   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.547254   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.547301   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.547632   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.547804   28545 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:48:13.549465   28545 status.go:330] ha-564251-m02 host status = "Stopped" (err=<nil>)
	I0721 23:48:13.549479   28545 status.go:343] host is not running, skipping remaining checks
	I0721 23:48:13.549485   28545 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:13.549504   28545 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:48:13.549804   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.549838   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.564441   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I0721 23:48:13.564928   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.565409   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.565431   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.565804   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.566003   28545 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:48:13.567433   28545 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:48:13.567451   28545 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:48:13.567731   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.567763   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.581862   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I0721 23:48:13.582254   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.582738   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.582756   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.583010   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.583216   28545 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:48:13.585850   28545 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:13.586301   28545 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:13.586321   28545 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:13.586470   28545 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:48:13.586822   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.586861   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.601002   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0721 23:48:13.601462   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.601874   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.601892   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.602171   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.602364   28545 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:48:13.602524   28545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:13.602546   28545 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:48:13.605329   28545 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:13.605724   28545 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:13.605751   28545 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:13.605889   28545 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:48:13.606021   28545 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:48:13.606152   28545 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:48:13.606255   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:48:13.685378   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:13.700153   28545 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:48:13.700177   28545 api_server.go:166] Checking apiserver status ...
	I0721 23:48:13.700207   28545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:48:13.714207   28545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:48:13.723935   28545 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:48:13.723985   28545 ssh_runner.go:195] Run: ls
	I0721 23:48:13.728047   28545 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:48:13.732462   28545 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:48:13.732486   28545 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:48:13.732497   28545 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:13.732517   28545 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:48:13.732915   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.732958   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.748076   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39151
	I0721 23:48:13.748503   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.748939   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.748959   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.749321   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.749504   28545 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:48:13.751011   28545 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:48:13.751026   28545 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:48:13.751340   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.751371   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.765594   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I0721 23:48:13.766043   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.766521   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.766539   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.766822   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.766990   28545 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:48:13.769885   28545 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:13.770346   28545 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:13.770379   28545 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:13.770497   28545 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:48:13.770870   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:13.770922   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:13.785449   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0721 23:48:13.785855   28545 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:13.786337   28545 main.go:141] libmachine: Using API Version  1
	I0721 23:48:13.786354   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:13.786720   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:13.786903   28545 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:48:13.787090   28545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:13.787112   28545 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:48:13.789958   28545 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:13.790384   28545 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:13.790407   28545 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:13.790533   28545 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:48:13.790723   28545 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:48:13.790873   28545 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:48:13.790997   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:48:13.865983   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:13.878680   28545 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
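
The healthz check that keeps returning 200/ok in these runs is a plain HTTPS GET against the HA virtual IP (https://192.168.39.254:8443/healthz), which stays answerable while at least one control-plane node behind the VIP is healthy. A self-contained sketch follows; skipping certificate verification is purely a shortcut of this sketch (the real check authenticates the endpoint), and checkHealthz is an illustrative name.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a GET to the apiserver's /healthz endpoint and
// expects a 200 response with the literal body "ok", matching the
// "returned 200: ok" lines in the trace.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only: do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("apiserver not healthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}

On this run the endpoint still answers ok even with ha-564251-m02 down, which is why both ha-564251 and ha-564251-m03 report apiserver: Running.
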
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 7 (597.734373ms)

-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0721 23:48:22.476027   28649 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:48:22.476180   28649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:22.476187   28649 out.go:304] Setting ErrFile to fd 2...
	I0721 23:48:22.476194   28649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:22.476383   28649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:48:22.476605   28649 out.go:298] Setting JSON to false
	I0721 23:48:22.476643   28649 mustload.go:65] Loading cluster: ha-564251
	I0721 23:48:22.476912   28649 notify.go:220] Checking for updates...
	I0721 23:48:22.477192   28649 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:48:22.477215   28649 status.go:255] checking status of ha-564251 ...
	I0721 23:48:22.477748   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.477801   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.497877   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0721 23:48:22.498329   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.498962   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.498982   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.499357   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.499577   28649 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:48:22.501241   28649 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:48:22.501259   28649 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:48:22.501575   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.501617   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.517058   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0721 23:48:22.517419   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.517905   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.517924   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.518236   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.518417   28649 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:48:22.520937   28649 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:22.521419   28649 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:48:22.521457   28649 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:22.521544   28649 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:48:22.521799   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.521829   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.536844   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0721 23:48:22.537241   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.537658   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.537676   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.538005   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.538186   28649 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:48:22.538396   28649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:22.538418   28649 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:48:22.540740   28649 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:22.541096   28649 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:48:22.541130   28649 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:22.541252   28649 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:48:22.541412   28649 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:48:22.541537   28649 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:48:22.541676   28649 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:48:22.618574   28649 ssh_runner.go:195] Run: systemctl --version
	I0721 23:48:22.624869   28649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:22.639474   28649 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:48:22.639499   28649 api_server.go:166] Checking apiserver status ...
	I0721 23:48:22.639528   28649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:48:22.654448   28649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:48:22.663797   28649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:48:22.663869   28649 ssh_runner.go:195] Run: ls
	I0721 23:48:22.667986   28649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:48:22.673537   28649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:48:22.673561   28649 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:48:22.673599   28649 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:22.673634   28649 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:48:22.673908   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.673948   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.689286   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0721 23:48:22.689706   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.690157   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.690177   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.690448   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.690637   28649 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:48:22.692135   28649 status.go:330] ha-564251-m02 host status = "Stopped" (err=<nil>)
	I0721 23:48:22.692149   28649 status.go:343] host is not running, skipping remaining checks
	I0721 23:48:22.692157   28649 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:22.692188   28649 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:48:22.692619   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.692673   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.707932   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0721 23:48:22.708399   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.708839   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.708858   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.709201   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.709368   28649 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:48:22.710694   28649 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:48:22.710707   28649 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:48:22.710993   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.711036   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.726324   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0721 23:48:22.726780   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.727289   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.727308   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.727599   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.727791   28649 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:48:22.730949   28649 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:22.731375   28649 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:22.731401   28649 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:22.731542   28649 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:48:22.731871   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.731907   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.746890   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I0721 23:48:22.747319   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.747763   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.747787   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.748061   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.748228   28649 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:48:22.748421   28649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:22.748442   28649 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:48:22.751171   28649 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:22.751538   28649 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:22.751562   28649 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:22.751698   28649 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:48:22.751850   28649 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:48:22.751996   28649 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:48:22.752117   28649 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:48:22.833827   28649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:22.847860   28649 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:48:22.847897   28649 api_server.go:166] Checking apiserver status ...
	I0721 23:48:22.847950   28649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:48:22.862104   28649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:48:22.871322   28649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:48:22.871380   28649 ssh_runner.go:195] Run: ls
	I0721 23:48:22.876006   28649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:48:22.880248   28649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:48:22.880272   28649 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:48:22.880283   28649 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:22.880303   28649 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:48:22.880686   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.880720   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.896064   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38081
	I0721 23:48:22.896443   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.896896   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.896922   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.897207   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.897403   28649 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:48:22.898802   28649 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:48:22.898816   28649 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:48:22.899093   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.899133   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.913442   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0721 23:48:22.913826   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.914253   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.914273   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.914557   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.914756   28649 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:48:22.917296   28649 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:22.917726   28649 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:22.917744   28649 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:22.917864   28649 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:48:22.918123   28649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:22.918153   28649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:22.934164   28649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41933
	I0721 23:48:22.934762   28649 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:22.935266   28649 main.go:141] libmachine: Using API Version  1
	I0721 23:48:22.935290   28649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:22.935606   28649 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:22.935816   28649 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:48:22.936025   28649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:22.936045   28649 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:48:22.938816   28649 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:22.939180   28649 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:22.939206   28649 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:22.939349   28649 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:48:22.939489   28649 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:48:22.939690   28649 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:48:22.939791   28649 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:48:23.017361   28649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:23.031750   28649 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
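Annotation: in the trace above, status verifies each node in sequence: query libvirt for the VM state, SSH in and confirm the kubelet service is active, locate the kube-apiserver process, and finally probe the shared VIP endpoint https://192.168.39.254:8443/healthz. The freezer-cgroup warning is non-fatal (on cgroup v2 guests no "freezer:" controller line appears in /proc/<pid>/cgroup, so the egrep exits 1) and the check falls through to the HTTP probe. Below is a minimal Go sketch of that final probe, assuming only the endpoint taken from the log; the skipped TLS verification is a diagnostic shortcut for illustration, not how minikube itself authenticates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above; 192.168.39.254 is the
        // cluster's load-balanced apiserver VIP.
        url := "https://192.168.39.254:8443/healthz"

        client := &http.Client{
            Timeout: 5 * time.Second,
            // A quick diagnostic probe skips certificate verification;
            // do not do this in production code.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok",
        // matching the "returned 200: ok" lines in the trace.
        fmt.Printf("%d: %s\n", resp.StatusCode, body)
    }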
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 7 (583.287211ms)

-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-564251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0721 23:48:33.945561   28755 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:48:33.945669   28755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:33.945678   28755 out.go:304] Setting ErrFile to fd 2...
	I0721 23:48:33.945683   28755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:33.945865   28755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:48:33.946023   28755 out.go:298] Setting JSON to false
	I0721 23:48:33.946051   28755 mustload.go:65] Loading cluster: ha-564251
	I0721 23:48:33.946164   28755 notify.go:220] Checking for updates...
	I0721 23:48:33.946394   28755 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:48:33.946409   28755 status.go:255] checking status of ha-564251 ...
	I0721 23:48:33.946822   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:33.946871   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:33.964919   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I0721 23:48:33.965396   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:33.966098   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:33.966124   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:33.966572   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:33.966824   28755 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:48:33.968206   28755 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:48:33.968219   28755 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:48:33.968495   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:33.968530   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:33.982979   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36035
	I0721 23:48:33.983388   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:33.983866   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:33.983884   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:33.984146   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:33.984323   28755 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:48:33.986472   28755 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:33.986896   28755 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:48:33.986921   28755 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:33.987062   28755 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:48:33.987368   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:33.987404   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:34.001331   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0721 23:48:34.001745   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:34.002192   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:34.002209   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:34.002463   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:34.002618   28755 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:48:34.002780   28755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:34.002814   28755 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:48:34.005411   28755 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:34.005802   28755 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:48:34.005835   28755 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:48:34.005986   28755 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:48:34.006156   28755 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:48:34.006313   28755 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:48:34.006465   28755 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:48:34.082581   28755 ssh_runner.go:195] Run: systemctl --version
	I0721 23:48:34.088615   28755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:34.103963   28755 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:48:34.103990   28755 api_server.go:166] Checking apiserver status ...
	I0721 23:48:34.104029   28755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:48:34.117441   28755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	W0721 23:48:34.126663   28755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:48:34.126709   28755 ssh_runner.go:195] Run: ls
	I0721 23:48:34.130627   28755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:48:34.134516   28755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:48:34.134536   28755 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:48:34.134548   28755 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:34.134566   28755 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:48:34.134867   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:34.134907   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:34.149431   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0721 23:48:34.149873   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:34.150388   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:34.150413   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:34.150756   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:34.150919   28755 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:48:34.152287   28755 status.go:330] ha-564251-m02 host status = "Stopped" (err=<nil>)
	I0721 23:48:34.152299   28755 status.go:343] host is not running, skipping remaining checks
	I0721 23:48:34.152307   28755 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:34.152348   28755 status.go:255] checking status of ha-564251-m03 ...
	I0721 23:48:34.152669   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:34.152705   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:34.167110   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0721 23:48:34.167527   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:34.167954   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:34.167975   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:34.168243   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:34.168423   28755 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:48:34.169878   28755 status.go:330] ha-564251-m03 host status = "Running" (err=<nil>)
	I0721 23:48:34.169904   28755 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:48:34.170163   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:34.170192   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:34.185076   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I0721 23:48:34.185563   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:34.186006   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:34.186030   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:34.186364   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:34.186533   28755 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:48:34.189167   28755 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:34.189610   28755 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:34.189636   28755 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:34.189764   28755 host.go:66] Checking if "ha-564251-m03" exists ...
	I0721 23:48:34.190053   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:34.190096   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:34.204460   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I0721 23:48:34.204870   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:34.205291   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:34.205306   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:34.205649   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:34.205859   28755 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:48:34.206099   28755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:34.206125   28755 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:48:34.209310   28755 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:34.209846   28755 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:34.209876   28755 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:34.210031   28755 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:48:34.210205   28755 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:48:34.210396   28755 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:48:34.210579   28755 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:48:34.289931   28755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:34.305410   28755 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:48:34.305436   28755 api_server.go:166] Checking apiserver status ...
	I0721 23:48:34.305466   28755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:48:34.321068   28755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup
	W0721 23:48:34.330567   28755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:48:34.330648   28755 ssh_runner.go:195] Run: ls
	I0721 23:48:34.334800   28755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:48:34.339529   28755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:48:34.339552   28755 status.go:422] ha-564251-m03 apiserver status = Running (err=<nil>)
	I0721 23:48:34.339563   28755 status.go:257] ha-564251-m03 status: &{Name:ha-564251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:48:34.339583   28755 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:48:34.339958   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:34.340001   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:34.354526   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0721 23:48:34.354957   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:34.355500   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:34.355522   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:34.355862   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:34.356064   28755 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:48:34.357594   28755 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:48:34.357609   28755 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:48:34.357968   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:34.358007   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:34.373734   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40057
	I0721 23:48:34.374208   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:34.374790   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:34.374818   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:34.375099   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:34.375291   28755 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:48:34.377811   28755 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:34.378211   28755 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:34.378237   28755 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:34.378333   28755 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:48:34.378598   28755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:34.378676   28755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:34.393990   28755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0721 23:48:34.394462   28755 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:34.394934   28755 main.go:141] libmachine: Using API Version  1
	I0721 23:48:34.394949   28755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:34.395238   28755 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:34.395429   28755 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:48:34.395584   28755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:48:34.395599   28755 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:48:34.398075   28755 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:34.398440   28755 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:34.398478   28755 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:34.398621   28755 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:48:34.398773   28755 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:48:34.398915   28755 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:48:34.399044   28755 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:48:34.473362   28755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:48:34.486323   28755 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr" : exit status 7
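Annotation: the test treats any non-zero exit from "minikube status" as a failure here because m02 was just restarted and every node should report Running. The exit value itself is informative: status composes its exit code as a bitmask of per-node state flags. The sketch below reconstructs that composition; the flag names and values are an assumption based on minikube's status command, not quoted from this build.

    package main

    import "fmt"

    // Assumed flag layout from minikube's status command
    // (cmd/minikube/cmd/status.go); treat names and values as a
    // reconstruction, not a verified quote from this build.
    const (
        minikubeNotRunningStatusFlag = 1 << 0 // host not running
        clusterNotRunningStatusFlag  = 1 << 1 // control plane not running
        k8sNotRunningStatusFlag      = 1 << 2 // kubelet/apiserver not running
    )

    func main() {
        // ha-564251-m02 reports Host, Kubelet, and APIServer all stopped,
        // so all three flags are set: 1|2|4 = 7, matching "exit status 7".
        fmt.Println(minikubeNotRunningStatusFlag |
            clusterNotRunningStatusFlag |
            k8sNotRunningStatusFlag)
    }

With ha-564251-m02 still reporting Stopped after "node start m02" (note the Audit table below, where the node stop/start commands have no End Time), the composed exit code 7 is exactly what the test's assertion rejects.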
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-564251 -n ha-564251
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-564251 logs -n 25: (1.36230672s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251:/home/docker/cp-test_ha-564251-m03_ha-564251.txt                       |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251 sudo cat                                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251.txt                                 |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m02:/home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m02 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04:/home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m04 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp testdata/cp-test.txt                                                | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251:/home/docker/cp-test_ha-564251-m04_ha-564251.txt                       |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251 sudo cat                                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251.txt                                 |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m02:/home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m02 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03:/home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m03 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-564251 node stop m02 -v=7                                                     | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-564251 node start m02 -v=7                                                    | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:40:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:40:40.546278   23196 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:40:40.546413   23196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:40.546425   23196 out.go:304] Setting ErrFile to fd 2...
	I0721 23:40:40.546431   23196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:40.546636   23196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:40:40.547182   23196 out.go:298] Setting JSON to false
	I0721 23:40:40.548067   23196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1385,"bootTime":1721603856,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:40:40.548125   23196 start.go:139] virtualization: kvm guest
	I0721 23:40:40.550458   23196 out.go:177] * [ha-564251] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:40:40.551991   23196 notify.go:220] Checking for updates...
	I0721 23:40:40.552011   23196 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:40:40.553311   23196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:40:40.554713   23196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:40:40.556029   23196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:40:40.557257   23196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0721 23:40:40.558476   23196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:40:40.559903   23196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:40:40.593913   23196 out.go:177] * Using the kvm2 driver based on user configuration
	I0721 23:40:40.595060   23196 start.go:297] selected driver: kvm2
	I0721 23:40:40.595084   23196 start.go:901] validating driver "kvm2" against <nil>
	I0721 23:40:40.595095   23196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:40:40.595784   23196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:40:40.595846   23196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:40:40.610241   23196 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:40:40.610301   23196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:40:40.610514   23196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:40:40.610541   23196 cni.go:84] Creating CNI manager for ""
	I0721 23:40:40.610547   23196 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0721 23:40:40.610559   23196 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0721 23:40:40.610663   23196 start.go:340] cluster config:
	{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:40:40.610753   23196 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:40:40.612886   23196 out.go:177] * Starting "ha-564251" primary control-plane node in "ha-564251" cluster
	I0721 23:40:40.613918   23196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:40:40.613953   23196 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0721 23:40:40.613962   23196 cache.go:56] Caching tarball of preloaded images
	I0721 23:40:40.614031   23196 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:40:40.614045   23196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:40:40.614355   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:40:40.614381   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json: {Name:mk5a28a63630db995c66c5ccfa02b795741655f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:40:40.614514   23196 start.go:360] acquireMachinesLock for ha-564251: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:40:40.614567   23196 start.go:364] duration metric: took 28.82µs to acquireMachinesLock for "ha-564251"
	I0721 23:40:40.614590   23196 start.go:93] Provisioning new machine with config: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:40:40.614689   23196 start.go:125] createHost starting for "" (driver="kvm2")
	I0721 23:40:40.616125   23196 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 23:40:40.616273   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:40:40.616314   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:40:40.629715   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0721 23:40:40.630093   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:40:40.630676   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:40:40.630696   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:40:40.631015   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:40:40.631203   23196 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:40:40.631366   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:40:40.631515   23196 start.go:159] libmachine.API.Create for "ha-564251" (driver="kvm2")
	I0721 23:40:40.631542   23196 client.go:168] LocalClient.Create starting
	I0721 23:40:40.631579   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0721 23:40:40.631619   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:40:40.631637   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:40:40.631704   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0721 23:40:40.631727   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:40:40.631746   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:40:40.631776   23196 main.go:141] libmachine: Running pre-create checks...
	I0721 23:40:40.631787   23196 main.go:141] libmachine: (ha-564251) Calling .PreCreateCheck
	I0721 23:40:40.632105   23196 main.go:141] libmachine: (ha-564251) Calling .GetConfigRaw
	I0721 23:40:40.632476   23196 main.go:141] libmachine: Creating machine...
	I0721 23:40:40.632491   23196 main.go:141] libmachine: (ha-564251) Calling .Create
	I0721 23:40:40.632600   23196 main.go:141] libmachine: (ha-564251) Creating KVM machine...
	I0721 23:40:40.633705   23196 main.go:141] libmachine: (ha-564251) DBG | found existing default KVM network
	I0721 23:40:40.634328   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:40.634206   23219 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0721 23:40:40.634379   23196 main.go:141] libmachine: (ha-564251) DBG | created network xml: 
	I0721 23:40:40.634396   23196 main.go:141] libmachine: (ha-564251) DBG | <network>
	I0721 23:40:40.634411   23196 main.go:141] libmachine: (ha-564251) DBG |   <name>mk-ha-564251</name>
	I0721 23:40:40.634417   23196 main.go:141] libmachine: (ha-564251) DBG |   <dns enable='no'/>
	I0721 23:40:40.634424   23196 main.go:141] libmachine: (ha-564251) DBG |   
	I0721 23:40:40.634431   23196 main.go:141] libmachine: (ha-564251) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0721 23:40:40.634442   23196 main.go:141] libmachine: (ha-564251) DBG |     <dhcp>
	I0721 23:40:40.634452   23196 main.go:141] libmachine: (ha-564251) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0721 23:40:40.634461   23196 main.go:141] libmachine: (ha-564251) DBG |     </dhcp>
	I0721 23:40:40.634474   23196 main.go:141] libmachine: (ha-564251) DBG |   </ip>
	I0721 23:40:40.634484   23196 main.go:141] libmachine: (ha-564251) DBG |   
	I0721 23:40:40.634495   23196 main.go:141] libmachine: (ha-564251) DBG | </network>
	I0721 23:40:40.634515   23196 main.go:141] libmachine: (ha-564251) DBG | 
	I0721 23:40:40.639387   23196 main.go:141] libmachine: (ha-564251) DBG | trying to create private KVM network mk-ha-564251 192.168.39.0/24...
	I0721 23:40:40.701034   23196 main.go:141] libmachine: (ha-564251) DBG | private KVM network mk-ha-564251 192.168.39.0/24 created
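The XML above is an ordinary libvirt network definition. A sketch of registering and starting such a network by shelling out to virsh (the kvm2 driver talks to the libvirt API directly; this assumes virsh is installed and reuses the XML the log printed):

package main

import (
	"os"
	"os/exec"
)

const netXML = `<network>
  <name>mk-ha-564251</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(netXML); err != nil {
		panic(err)
	}
	f.Close()
	// Register the definition, then activate the network.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-ha-564251"},
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}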
	I0721 23:40:40.701077   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:40.700975   23219 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:40:40.701091   23196 main.go:141] libmachine: (ha-564251) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251 ...
	I0721 23:40:40.701111   23196 main.go:141] libmachine: (ha-564251) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:40:40.701138   23196 main.go:141] libmachine: (ha-564251) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:40:40.947585   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:40.947443   23219 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa...
	I0721 23:40:41.145755   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:41.145633   23219 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/ha-564251.rawdisk...
	I0721 23:40:41.145788   23196 main.go:141] libmachine: (ha-564251) DBG | Writing magic tar header
	I0721 23:40:41.145800   23196 main.go:141] libmachine: (ha-564251) DBG | Writing SSH key tar header
	I0721 23:40:41.145807   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:41.145755   23219 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251 ...
	I0721 23:40:41.145887   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251
	I0721 23:40:41.145903   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0721 23:40:41.145915   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251 (perms=drwx------)
	I0721 23:40:41.145943   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0721 23:40:41.145951   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:40:41.145957   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0721 23:40:41.145974   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0721 23:40:41.145990   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0721 23:40:41.146003   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0721 23:40:41.146017   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0721 23:40:41.146025   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home/jenkins
	I0721 23:40:41.146031   23196 main.go:141] libmachine: (ha-564251) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0721 23:40:41.146039   23196 main.go:141] libmachine: (ha-564251) Creating domain...
	I0721 23:40:41.146050   23196 main.go:141] libmachine: (ha-564251) DBG | Checking permissions on dir: /home
	I0721 23:40:41.146058   23196 main.go:141] libmachine: (ha-564251) DBG | Skipping /home - not owner
	I0721 23:40:41.147119   23196 main.go:141] libmachine: (ha-564251) define libvirt domain using xml: 
	I0721 23:40:41.147144   23196 main.go:141] libmachine: (ha-564251) <domain type='kvm'>
	I0721 23:40:41.147155   23196 main.go:141] libmachine: (ha-564251)   <name>ha-564251</name>
	I0721 23:40:41.147172   23196 main.go:141] libmachine: (ha-564251)   <memory unit='MiB'>2200</memory>
	I0721 23:40:41.147184   23196 main.go:141] libmachine: (ha-564251)   <vcpu>2</vcpu>
	I0721 23:40:41.147192   23196 main.go:141] libmachine: (ha-564251)   <features>
	I0721 23:40:41.147202   23196 main.go:141] libmachine: (ha-564251)     <acpi/>
	I0721 23:40:41.147213   23196 main.go:141] libmachine: (ha-564251)     <apic/>
	I0721 23:40:41.147225   23196 main.go:141] libmachine: (ha-564251)     <pae/>
	I0721 23:40:41.147236   23196 main.go:141] libmachine: (ha-564251)     
	I0721 23:40:41.147248   23196 main.go:141] libmachine: (ha-564251)   </features>
	I0721 23:40:41.147263   23196 main.go:141] libmachine: (ha-564251)   <cpu mode='host-passthrough'>
	I0721 23:40:41.147274   23196 main.go:141] libmachine: (ha-564251)   
	I0721 23:40:41.147281   23196 main.go:141] libmachine: (ha-564251)   </cpu>
	I0721 23:40:41.147293   23196 main.go:141] libmachine: (ha-564251)   <os>
	I0721 23:40:41.147303   23196 main.go:141] libmachine: (ha-564251)     <type>hvm</type>
	I0721 23:40:41.147315   23196 main.go:141] libmachine: (ha-564251)     <boot dev='cdrom'/>
	I0721 23:40:41.147343   23196 main.go:141] libmachine: (ha-564251)     <boot dev='hd'/>
	I0721 23:40:41.147354   23196 main.go:141] libmachine: (ha-564251)     <bootmenu enable='no'/>
	I0721 23:40:41.147363   23196 main.go:141] libmachine: (ha-564251)   </os>
	I0721 23:40:41.147370   23196 main.go:141] libmachine: (ha-564251)   <devices>
	I0721 23:40:41.147380   23196 main.go:141] libmachine: (ha-564251)     <disk type='file' device='cdrom'>
	I0721 23:40:41.147395   23196 main.go:141] libmachine: (ha-564251)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/boot2docker.iso'/>
	I0721 23:40:41.147408   23196 main.go:141] libmachine: (ha-564251)       <target dev='hdc' bus='scsi'/>
	I0721 23:40:41.147418   23196 main.go:141] libmachine: (ha-564251)       <readonly/>
	I0721 23:40:41.147429   23196 main.go:141] libmachine: (ha-564251)     </disk>
	I0721 23:40:41.147441   23196 main.go:141] libmachine: (ha-564251)     <disk type='file' device='disk'>
	I0721 23:40:41.147453   23196 main.go:141] libmachine: (ha-564251)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0721 23:40:41.147469   23196 main.go:141] libmachine: (ha-564251)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/ha-564251.rawdisk'/>
	I0721 23:40:41.147480   23196 main.go:141] libmachine: (ha-564251)       <target dev='hda' bus='virtio'/>
	I0721 23:40:41.147491   23196 main.go:141] libmachine: (ha-564251)     </disk>
	I0721 23:40:41.147505   23196 main.go:141] libmachine: (ha-564251)     <interface type='network'>
	I0721 23:40:41.147525   23196 main.go:141] libmachine: (ha-564251)       <source network='mk-ha-564251'/>
	I0721 23:40:41.147536   23196 main.go:141] libmachine: (ha-564251)       <model type='virtio'/>
	I0721 23:40:41.147546   23196 main.go:141] libmachine: (ha-564251)     </interface>
	I0721 23:40:41.147568   23196 main.go:141] libmachine: (ha-564251)     <interface type='network'>
	I0721 23:40:41.147581   23196 main.go:141] libmachine: (ha-564251)       <source network='default'/>
	I0721 23:40:41.147588   23196 main.go:141] libmachine: (ha-564251)       <model type='virtio'/>
	I0721 23:40:41.147606   23196 main.go:141] libmachine: (ha-564251)     </interface>
	I0721 23:40:41.147616   23196 main.go:141] libmachine: (ha-564251)     <serial type='pty'>
	I0721 23:40:41.147645   23196 main.go:141] libmachine: (ha-564251)       <target port='0'/>
	I0721 23:40:41.147663   23196 main.go:141] libmachine: (ha-564251)     </serial>
	I0721 23:40:41.147669   23196 main.go:141] libmachine: (ha-564251)     <console type='pty'>
	I0721 23:40:41.147677   23196 main.go:141] libmachine: (ha-564251)       <target type='serial' port='0'/>
	I0721 23:40:41.147690   23196 main.go:141] libmachine: (ha-564251)     </console>
	I0721 23:40:41.147698   23196 main.go:141] libmachine: (ha-564251)     <rng model='virtio'>
	I0721 23:40:41.147703   23196 main.go:141] libmachine: (ha-564251)       <backend model='random'>/dev/random</backend>
	I0721 23:40:41.147710   23196 main.go:141] libmachine: (ha-564251)     </rng>
	I0721 23:40:41.147714   23196 main.go:141] libmachine: (ha-564251)     
	I0721 23:40:41.147721   23196 main.go:141] libmachine: (ha-564251)     
	I0721 23:40:41.147730   23196 main.go:141] libmachine: (ha-564251)   </devices>
	I0721 23:40:41.147737   23196 main.go:141] libmachine: (ha-564251) </domain>
	I0721 23:40:41.147741   23196 main.go:141] libmachine: (ha-564251) 
	I0721 23:40:41.152060   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:1b:3f:cc in network default
	I0721 23:40:41.152594   23196 main.go:141] libmachine: (ha-564251) Ensuring networks are active...
	I0721 23:40:41.152616   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:41.153170   23196 main.go:141] libmachine: (ha-564251) Ensuring network default is active
	I0721 23:40:41.153584   23196 main.go:141] libmachine: (ha-564251) Ensuring network mk-ha-564251 is active
	I0721 23:40:41.154236   23196 main.go:141] libmachine: (ha-564251) Getting domain xml...
	I0721 23:40:41.154951   23196 main.go:141] libmachine: (ha-564251) Creating domain...
	I0721 23:40:42.321898   23196 main.go:141] libmachine: (ha-564251) Waiting to get IP...
	I0721 23:40:42.322641   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:42.323001   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:42.323045   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:42.322986   23219 retry.go:31] will retry after 226.990581ms: waiting for machine to come up
	I0721 23:40:42.551449   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:42.551889   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:42.551917   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:42.551843   23219 retry.go:31] will retry after 345.157454ms: waiting for machine to come up
	I0721 23:40:42.898184   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:42.898667   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:42.898716   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:42.898637   23219 retry.go:31] will retry after 450.376972ms: waiting for machine to come up
	I0721 23:40:43.350132   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:43.350532   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:43.350567   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:43.350476   23219 retry.go:31] will retry after 548.229138ms: waiting for machine to come up
	I0721 23:40:43.900112   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:43.900526   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:43.900558   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:43.900490   23219 retry.go:31] will retry after 742.775493ms: waiting for machine to come up
	I0721 23:40:44.645071   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:44.645486   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:44.645513   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:44.645434   23219 retry.go:31] will retry after 784.586324ms: waiting for machine to come up
	I0721 23:40:45.431400   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:45.431765   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:45.431801   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:45.431727   23219 retry.go:31] will retry after 1.075109633s: waiting for machine to come up
	I0721 23:40:46.508612   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:46.509010   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:46.509035   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:46.508968   23219 retry.go:31] will retry after 1.2901904s: waiting for machine to come up
	I0721 23:40:47.801398   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:47.801883   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:47.801911   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:47.801825   23219 retry.go:31] will retry after 1.682137152s: waiting for machine to come up
	I0721 23:40:49.486662   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:49.487036   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:49.487066   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:49.486988   23219 retry.go:31] will retry after 1.799508967s: waiting for machine to come up
	I0721 23:40:51.287656   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:51.288059   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:51.288085   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:51.288008   23219 retry.go:31] will retry after 2.604882291s: waiting for machine to come up
	I0721 23:40:53.895574   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:53.895902   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:53.895921   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:53.895875   23219 retry.go:31] will retry after 2.265187217s: waiting for machine to come up
	I0721 23:40:56.162821   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:56.163266   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find current IP address of domain ha-564251 in network mk-ha-564251
	I0721 23:40:56.163291   23196 main.go:141] libmachine: (ha-564251) DBG | I0721 23:40:56.163221   23219 retry.go:31] will retry after 3.572604507s: waiting for machine to come up
	I0721 23:40:59.739716   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.740066   23196 main.go:141] libmachine: (ha-564251) Found IP for machine: 192.168.39.91
	I0721 23:40:59.740097   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has current primary IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
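The dozen retry.go lines above are a jittered, growing backoff around a DHCP-lease poll. A self-contained Go sketch of the same shape (the interval schedule here is made up; the real driver computes its own delays):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping
// a jittered, growing interval between tries, like the "will retry after"
// lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", d)
		time.Sleep(d)
	}
	return errors.New("machine never reported an IP")
}

func main() {
	polls := 0
	_ = retryWithBackoff(10, 200*time.Millisecond, func() error {
		polls++
		if polls < 4 { // pretend the DHCP lease appears on the 4th poll
			return errors.New("no lease yet")
		}
		return nil
	})
	fmt.Println("got IP after", polls, "polls")
}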
	I0721 23:40:59.740106   23196 main.go:141] libmachine: (ha-564251) Reserving static IP address...
	I0721 23:40:59.740418   23196 main.go:141] libmachine: (ha-564251) DBG | unable to find host DHCP lease matching {name: "ha-564251", mac: "52:54:00:92:9e:c7", ip: "192.168.39.91"} in network mk-ha-564251
	I0721 23:40:59.809966   23196 main.go:141] libmachine: (ha-564251) DBG | Getting to WaitForSSH function...
	I0721 23:40:59.809998   23196 main.go:141] libmachine: (ha-564251) Reserved static IP address: 192.168.39.91
	I0721 23:40:59.810012   23196 main.go:141] libmachine: (ha-564251) Waiting for SSH to be available...
	I0721 23:40:59.812265   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.812627   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:9e:c7}
	I0721 23:40:59.812652   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.812790   23196 main.go:141] libmachine: (ha-564251) DBG | Using SSH client type: external
	I0721 23:40:59.812811   23196 main.go:141] libmachine: (ha-564251) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa (-rw-------)
	I0721 23:40:59.812828   23196 main.go:141] libmachine: (ha-564251) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0721 23:40:59.812833   23196 main.go:141] libmachine: (ha-564251) DBG | About to run SSH command:
	I0721 23:40:59.812841   23196 main.go:141] libmachine: (ha-564251) DBG | exit 0
	I0721 23:40:59.930266   23196 main.go:141] libmachine: (ha-564251) DBG | SSH cmd err, output: <nil>: 
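The probe above succeeds once `exit 0` can be run over SSH, confirming network reachability and key auth in one step. A sketch using the system ssh binary with the same option set the log shows (host and key path are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady reports whether "exit 0" succeeds over SSH: no host-key prompts,
// key auth only, bounded connect time, mirroring the flags in the log.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("ssh up:", sshReady("192.168.39.91", "/path/to/id_rsa"))
}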
	I0721 23:40:59.930538   23196 main.go:141] libmachine: (ha-564251) KVM machine creation complete!
	I0721 23:40:59.930835   23196 main.go:141] libmachine: (ha-564251) Calling .GetConfigRaw
	I0721 23:40:59.931422   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:40:59.931615   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:40:59.931782   23196 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0721 23:40:59.931820   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:40:59.933150   23196 main.go:141] libmachine: Detecting operating system of created instance...
	I0721 23:40:59.933163   23196 main.go:141] libmachine: Waiting for SSH to be available...
	I0721 23:40:59.933168   23196 main.go:141] libmachine: Getting to WaitForSSH function...
	I0721 23:40:59.933174   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:40:59.935350   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.935655   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:40:59.935689   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:40:59.935824   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:40:59.935986   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:40:59.936138   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:40:59.936267   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:40:59.936438   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:40:59.936715   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:40:59.936735   23196 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0721 23:41:00.033692   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:41:00.033716   23196 main.go:141] libmachine: Detecting the provisioner...
	I0721 23:41:00.033726   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.036753   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.037113   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.037131   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.037281   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.037582   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.037816   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.037975   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.038123   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.038281   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.038291   23196 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0721 23:41:00.134971   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0721 23:41:00.135071   23196 main.go:141] libmachine: found compatible host: buildroot
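Matching a provisioner to the guest comes down to reading the ID field of /etc/os-release ("buildroot" here). A small Go sketch of that parse:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osReleaseID extracts the ID= field from an os-release style file.
func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`), nil
		}
	}
	return "", sc.Err()
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	fmt.Println(id, err)
}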
	I0721 23:41:00.135109   23196 main.go:141] libmachine: Provisioning with buildroot...
	I0721 23:41:00.135123   23196 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:41:00.135381   23196 buildroot.go:166] provisioning hostname "ha-564251"
	I0721 23:41:00.135410   23196 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:41:00.135584   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.137805   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.138153   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.138178   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.138331   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.138496   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.138671   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.138815   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.138980   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.139142   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.139152   23196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-564251 && echo "ha-564251" | sudo tee /etc/hostname
	I0721 23:41:00.247562   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251
	
	I0721 23:41:00.247593   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.250032   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.250427   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.250456   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.250699   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.250867   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.251037   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.251221   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.251397   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.251588   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.251604   23196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-564251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-564251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-564251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:41:00.354410   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:41:00.354435   23196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:41:00.354462   23196 buildroot.go:174] setting up certificates
	I0721 23:41:00.354472   23196 provision.go:84] configureAuth start
	I0721 23:41:00.354480   23196 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:41:00.354804   23196 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:41:00.357273   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.357634   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.357661   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.357806   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.359631   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.359886   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.359913   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.359992   23196 provision.go:143] copyHostCerts
	I0721 23:41:00.360055   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:41:00.360099   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0721 23:41:00.360116   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:41:00.360196   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:41:00.360292   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:41:00.360316   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0721 23:41:00.360324   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:41:00.360360   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:41:00.360460   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:41:00.360489   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0721 23:41:00.360498   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:41:00.360530   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:41:00.360593   23196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.ha-564251 san=[127.0.0.1 192.168.39.91 ha-564251 localhost minikube]
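The server cert above must carry every address a client might dial, hence the san=[...] list. A compact sketch of issuing a certificate with those SANs using Go's standard library (self-signed to stay short; the real flow signs with the minikube CA key shown in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSignedServerCert issues a cert whose SANs cover the given IPs and names.
func selfSignedServerCert(ips []net.IP, dnsNames []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-564251"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		IPAddresses:  ips,
		DNSNames:     dnsNames,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := selfSignedServerCert(
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.91")},
		[]string{"ha-564251", "localhost", "minikube"})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pemBytes))
}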
	I0721 23:41:00.448962   23196 provision.go:177] copyRemoteCerts
	I0721 23:41:00.449011   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:41:00.449031   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.451527   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.451855   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.451890   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.452006   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.452202   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.452366   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.452506   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:00.528321   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0721 23:41:00.528414   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 23:41:00.551499   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0721 23:41:00.551569   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:41:00.573075   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0721 23:41:00.573127   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0721 23:41:00.594064   23196 provision.go:87] duration metric: took 239.579894ms to configureAuth
	I0721 23:41:00.594094   23196 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:41:00.594255   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:00.594334   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.596669   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.596983   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.597008   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.597156   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.597365   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.597515   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.597690   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.597863   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.598012   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.598028   23196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:41:00.851630   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:41:00.851659   23196 main.go:141] libmachine: Checking connection to Docker...
	I0721 23:41:00.851667   23196 main.go:141] libmachine: (ha-564251) Calling .GetURL
	I0721 23:41:00.852807   23196 main.go:141] libmachine: (ha-564251) DBG | Using libvirt version 6000000
	I0721 23:41:00.854810   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.855075   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.855099   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.855246   23196 main.go:141] libmachine: Docker is up and running!
	I0721 23:41:00.855259   23196 main.go:141] libmachine: Reticulating splines...
	I0721 23:41:00.855268   23196 client.go:171] duration metric: took 20.223716322s to LocalClient.Create
	I0721 23:41:00.855293   23196 start.go:167] duration metric: took 20.223778038s to libmachine.API.Create "ha-564251"
	I0721 23:41:00.855305   23196 start.go:293] postStartSetup for "ha-564251" (driver="kvm2")
	I0721 23:41:00.855318   23196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:41:00.855339   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:00.855542   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:41:00.855563   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.857342   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.857731   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.857749   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.857896   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.858145   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.858289   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.858455   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:00.936595   23196 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:41:00.940663   23196 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:41:00.940681   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:41:00.940740   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:41:00.940808   23196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0721 23:41:00.940817   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0721 23:41:00.940906   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 23:41:00.950096   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:41:00.976055   23196 start.go:296] duration metric: took 120.738688ms for postStartSetup
	I0721 23:41:00.976098   23196 main.go:141] libmachine: (ha-564251) Calling .GetConfigRaw
	I0721 23:41:00.976700   23196 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:41:00.979268   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.979603   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.979618   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.979846   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:41:00.980000   23196 start.go:128] duration metric: took 20.365301805s to createHost
	I0721 23:41:00.980018   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:00.982201   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.982498   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:00.982541   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:00.982655   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:00.982885   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.983071   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:00.983240   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:00.983473   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:00.983649   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:41:00.983662   23196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:41:01.078829   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605261.053137130
	
	I0721 23:41:01.078846   23196 fix.go:216] guest clock: 1721605261.053137130
	I0721 23:41:01.078862   23196 fix.go:229] Guest: 2024-07-21 23:41:01.05313713 +0000 UTC Remote: 2024-07-21 23:41:00.980009736 +0000 UTC m=+20.466637872 (delta=73.127394ms)
	I0721 23:41:01.078890   23196 fix.go:200] guest clock delta is within tolerance: 73.127394ms
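The clock check parses the guest's `date +%s.%N` output and compares it against the host clock, accepting small drift. A sketch of that comparison (the two-second tolerance here is an assumption, not the value minikube uses, and float parsing loses sub-microsecond precision):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns how far
// the guest clock is from the given host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	d, _ := clockDelta("1721605261.053137130", time.Unix(1721605260, 980009736))
	fmt.Println("delta:", d, "within tolerance:", d.Abs() < 2*time.Second)
}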
	I0721 23:41:01.078895   23196 start.go:83] releasing machines lock for "ha-564251", held for 20.46431804s
	I0721 23:41:01.078911   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:01.079173   23196 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:41:01.081997   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.082367   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:01.082391   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.082540   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:01.083066   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:01.083240   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:01.083343   23196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:41:01.083392   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:01.083457   23196 ssh_runner.go:195] Run: cat /version.json
	I0721 23:41:01.083482   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:01.085717   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.086033   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:01.086070   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.086089   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.086205   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:01.086378   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:01.086496   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:01.086521   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:01.086632   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:01.086689   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:01.086821   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:01.086819   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:01.086954   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:01.087174   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:01.187788   23196 ssh_runner.go:195] Run: systemctl --version
	I0721 23:41:01.193357   23196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:41:01.345767   23196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:41:01.351542   23196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:41:01.351601   23196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:41:01.365775   23196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 23:41:01.365792   23196 start.go:495] detecting cgroup driver to use...
	I0721 23:41:01.365842   23196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:41:01.380850   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:41:01.393445   23196 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:41:01.393503   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:41:01.405644   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:41:01.418583   23196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:41:01.526640   23196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:41:01.658590   23196 docker.go:233] disabling docker service ...
	I0721 23:41:01.658658   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:41:01.679251   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:41:01.691467   23196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:41:01.824984   23196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:41:01.953360   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0721 23:41:01.966263   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:41:01.982934   23196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:41:01.983004   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:01.992477   23196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:41:01.992553   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.002358   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.011880   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.021371   23196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:41:02.031204   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.041031   23196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:02.056975   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
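Each of those sed invocations is an anchored line rewrite of /etc/crio/crio.conf.d/02-crio.conf. The same edits expressed with Go regexps, run against a stand-in config so the effect is visible (the input lines are invented; only the substitution patterns come from the log):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.2"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	edits := []struct{ re, repl string }{
		{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
		{`(?m)^conmon_cgroup = .*\n`, ``},                                 // drop the old value...
		{`(?m)^(cgroup_manager = .*)$`, "$1\nconmon_cgroup = \"pod\""},    // ...re-add it after cgroup_manager
	}
	for _, e := range edits {
		conf = regexp.MustCompile(e.re).ReplaceAllString(conf, e.repl)
	}
	fmt.Print(conf)
}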
	I0721 23:41:02.066514   23196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:41:02.075217   23196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0721 23:41:02.075276   23196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0721 23:41:02.086572   23196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
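	The status-255 failure above is the expected first-boot case: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the sysctl probe doubles as a cheap check with modprobe as the fallback. Reduced to a shell sketch (same commands as in the log):

	    sudo sysctl net.bridge.bridge-nf-call-iptables \
	      || sudo modprobe br_netfilter     # the key appears only after the module loads
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"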
	I0721 23:41:02.095451   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:41:02.225576   23196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0721 23:41:02.354323   23196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:41:02.354402   23196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:41:02.358757   23196 start.go:563] Will wait 60s for crictl version
	I0721 23:41:02.358801   23196 ssh_runner.go:195] Run: which crictl
	I0721 23:41:02.362040   23196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:41:02.399992   23196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0721 23:41:02.400072   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:41:02.427409   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:41:02.456411   23196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:41:02.457787   23196 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:41:02.460589   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:02.460935   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:02.460962   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:02.461140   23196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:41:02.465058   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:41:02.477327   23196 kubeadm.go:883] updating cluster {Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0721 23:41:02.477427   23196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:41:02.477467   23196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:41:02.508153   23196 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0721 23:41:02.508222   23196 ssh_runner.go:195] Run: which lz4
	I0721 23:41:02.511743   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0721 23:41:02.511843   23196 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0721 23:41:02.515551   23196 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0721 23:41:02.515580   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0721 23:41:03.710943   23196 crio.go:462] duration metric: took 1.199137138s to copy over tarball
	I0721 23:41:03.711017   23196 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0721 23:41:05.793655   23196 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082616359s)
	I0721 23:41:05.793680   23196 crio.go:469] duration metric: took 2.082708301s to extract the tarball
	I0721 23:41:05.793687   23196 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0721 23:41:05.831124   23196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:41:05.872861   23196 crio.go:514] all images are preloaded for cri-o runtime.
	I0721 23:41:05.872879   23196 cache_images.go:84] Images are preloaded, skipping loading
	I0721 23:41:05.872887   23196 kubeadm.go:934] updating node { 192.168.39.91 8443 v1.30.3 crio true true} ...
	I0721 23:41:05.873014   23196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-564251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 23:41:05.873090   23196 ssh_runner.go:195] Run: crio config
	I0721 23:41:05.913664   23196 cni.go:84] Creating CNI manager for ""
	I0721 23:41:05.913683   23196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0721 23:41:05.913692   23196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 23:41:05.913717   23196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-564251 NodeName:ha-564251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 23:41:05.913875   23196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-564251"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
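	This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below, promoted to kubeadm.yaml once the pre-start checks pass, and finally handed to kubeadm init. Stitched together from the later entries in this log, the flow is:

	    # scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=...   # full list as in the 23:41:07.264848 entry below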
	I0721 23:41:05.913903   23196 kube-vip.go:115] generating kube-vip config ...
	I0721 23:41:05.913944   23196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0721 23:41:05.932034   23196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0721 23:41:05.932159   23196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
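	Two details of this manifest are easy to miss: it is deployed as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml appears a few lines below), and its kubeconfig volume points at /etc/kubernetes/super-admin.conf rather than admin.conf. The latter is presumably because on kubeadm v1.29+ (this run is v1.30.3) admin.conf is no longer a super-user credential and only gains its RBAC binding after the API server is reachable, while kube-vip must hold the VIP before that point.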
	I0721 23:41:05.932216   23196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:41:05.941481   23196 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 23:41:05.941530   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0721 23:41:05.950214   23196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0721 23:41:05.967032   23196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:41:05.982874   23196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0721 23:41:05.997480   23196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0721 23:41:06.012067   23196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0721 23:41:06.015784   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
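	Both /etc/hosts edits in this log (host.minikube.internal at 23:41:02, control-plane.minikube.internal here) use the same replace-then-copy idiom: filter out any stale entry, append the fresh one to a temp file, and sudo-copy the whole file back so /etc/hosts is never left half-written. Generalized (<ip> and <name> are placeholders, not literal values; the log uses echo with a literal tab where this sketch uses printf):

	    { grep -v $'\t<name>$' /etc/hosts; printf '%s\t%s\n' '<ip>' '<name>'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts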
	I0721 23:41:06.027237   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:41:06.142381   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:41:06.159549   23196 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251 for IP: 192.168.39.91
	I0721 23:41:06.159567   23196 certs.go:194] generating shared ca certs ...
	I0721 23:41:06.159582   23196 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.159731   23196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:41:06.159769   23196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:41:06.159778   23196 certs.go:256] generating profile certs ...
	I0721 23:41:06.159835   23196 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key
	I0721 23:41:06.159855   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt with IP's: []
	I0721 23:41:06.368527   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt ...
	I0721 23:41:06.368556   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt: {Name:mk4fd652ead42f577c5596c2cceaf3cd9cc210ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.368714   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key ...
	I0721 23:41:06.368724   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key: {Name:mkb22d50d215d5e147d7bc98131bf78c78b3ffb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.368800   23196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.277f15eb
	I0721 23:41:06.368814   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.277f15eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91 192.168.39.254]
	I0721 23:41:06.571331   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.277f15eb ...
	I0721 23:41:06.571360   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.277f15eb: {Name:mk17d073f9fd70c9cc64a6ed93f552a2be0a4d9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.571514   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.277f15eb ...
	I0721 23:41:06.571526   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.277f15eb: {Name:mk769c41017d78a39c6d3d1328ad259c5de648a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.571591   23196 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.277f15eb -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt
	I0721 23:41:06.571671   23196 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.277f15eb -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key
	I0721 23:41:06.571725   23196 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key
	I0721 23:41:06.571740   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt with IP's: []
	I0721 23:41:06.759255   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt ...
	I0721 23:41:06.759280   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt: {Name:mk94f17fb27624bf2677b9a0c6710678fdcfe163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.759426   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key ...
	I0721 23:41:06.759437   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key: {Name:mk36259a9d79f8aa2c13c70a83696bd241408831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:06.759500   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0721 23:41:06.759512   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0721 23:41:06.759527   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0721 23:41:06.759563   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0721 23:41:06.759581   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0721 23:41:06.759592   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0721 23:41:06.759602   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0721 23:41:06.759613   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0721 23:41:06.759657   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0721 23:41:06.759690   23196 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0721 23:41:06.759699   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:41:06.759722   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:41:06.759747   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:41:06.759767   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:41:06.759802   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:41:06.759831   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0721 23:41:06.759845   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:06.759857   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0721 23:41:06.760437   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:41:06.784701   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:41:06.806275   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:41:06.828117   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:41:06.849183   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0721 23:41:06.870264   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:41:06.892346   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:41:06.917113   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:41:06.965862   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0721 23:41:06.992952   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:41:07.013436   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0721 23:41:07.034226   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 23:41:07.048830   23196 ssh_runner.go:195] Run: openssl version
	I0721 23:41:07.053979   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:41:07.063324   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:07.067182   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:07.067223   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:07.072273   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 23:41:07.081598   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0721 23:41:07.090660   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0721 23:41:07.094423   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0721 23:41:07.094457   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0721 23:41:07.099469   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0721 23:41:07.108948   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0721 23:41:07.118492   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0721 23:41:07.122330   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0721 23:41:07.122371   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0721 23:41:07.127548   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
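	The openssl/ln sequence above implements the standard OpenSSL trust-store layout: every CA under /etc/ssl/certs must also be reachable as <subject-hash>.0, where the hash is what openssl x509 -hash prints. For one certificate (minikubeCA, whose hash link b5213941.0 appears above), the pattern is roughly:

	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0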
	I0721 23:41:07.137242   23196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:41:07.140900   23196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:41:07.140956   23196 kubeadm.go:392] StartCluster: {Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:41:07.141049   23196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0721 23:41:07.141087   23196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0721 23:41:07.175295   23196 cri.go:89] found id: ""
	I0721 23:41:07.175365   23196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0721 23:41:07.184254   23196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 23:41:07.192907   23196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 23:41:07.201225   23196 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 23:41:07.201246   23196 kubeadm.go:157] found existing configuration files:
	
	I0721 23:41:07.201287   23196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0721 23:41:07.209026   23196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 23:41:07.209073   23196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 23:41:07.217354   23196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0721 23:41:07.225210   23196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 23:41:07.225260   23196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 23:41:07.233308   23196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0721 23:41:07.241082   23196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 23:41:07.241131   23196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 23:41:07.249118   23196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0721 23:41:07.256727   23196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 23:41:07.256766   23196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
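	The four grep-then-rm pairs above are one check unrolled: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is treated as stale and removed; on this first start none of the files exist yet, hence the four status-2 greps. As a loop:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done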
	I0721 23:41:07.264848   23196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 23:41:07.482211   23196 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 23:41:20.722699   23196 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0721 23:41:20.722753   23196 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 23:41:20.722860   23196 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 23:41:20.723003   23196 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 23:41:20.723134   23196 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0721 23:41:20.723225   23196 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 23:41:20.724887   23196 out.go:204]   - Generating certificates and keys ...
	I0721 23:41:20.724966   23196 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 23:41:20.725021   23196 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 23:41:20.725103   23196 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0721 23:41:20.725173   23196 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0721 23:41:20.725248   23196 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0721 23:41:20.725323   23196 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0721 23:41:20.725377   23196 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0721 23:41:20.725471   23196 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-564251 localhost] and IPs [192.168.39.91 127.0.0.1 ::1]
	I0721 23:41:20.725541   23196 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0721 23:41:20.725646   23196 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-564251 localhost] and IPs [192.168.39.91 127.0.0.1 ::1]
	I0721 23:41:20.725705   23196 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0721 23:41:20.725761   23196 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0721 23:41:20.725799   23196 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0721 23:41:20.725853   23196 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 23:41:20.725924   23196 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 23:41:20.726003   23196 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0721 23:41:20.726081   23196 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 23:41:20.726136   23196 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 23:41:20.726182   23196 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 23:41:20.726246   23196 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 23:41:20.726344   23196 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 23:41:20.727838   23196 out.go:204]   - Booting up control plane ...
	I0721 23:41:20.727929   23196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 23:41:20.728019   23196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 23:41:20.728103   23196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 23:41:20.728250   23196 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 23:41:20.728370   23196 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 23:41:20.728410   23196 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 23:41:20.728529   23196 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0721 23:41:20.728606   23196 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0721 23:41:20.728660   23196 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00213497s
	I0721 23:41:20.728750   23196 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0721 23:41:20.728831   23196 kubeadm.go:310] [api-check] The API server is healthy after 8.738902427s
	I0721 23:41:20.728961   23196 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 23:41:20.729100   23196 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 23:41:20.729368   23196 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 23:41:20.729606   23196 kubeadm.go:310] [mark-control-plane] Marking the node ha-564251 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 23:41:20.729695   23196 kubeadm.go:310] [bootstrap-token] Using token: a27g5i.jpb7sxjvb5ai1hxv
	I0721 23:41:20.731146   23196 out.go:204]   - Configuring RBAC rules ...
	I0721 23:41:20.731263   23196 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 23:41:20.731354   23196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 23:41:20.731480   23196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 23:41:20.731660   23196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0721 23:41:20.731814   23196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 23:41:20.731932   23196 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 23:41:20.732084   23196 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 23:41:20.732145   23196 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 23:41:20.732214   23196 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 23:41:20.732223   23196 kubeadm.go:310] 
	I0721 23:41:20.732303   23196 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 23:41:20.732312   23196 kubeadm.go:310] 
	I0721 23:41:20.732420   23196 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 23:41:20.732431   23196 kubeadm.go:310] 
	I0721 23:41:20.732479   23196 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 23:41:20.732555   23196 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 23:41:20.732623   23196 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 23:41:20.732635   23196 kubeadm.go:310] 
	I0721 23:41:20.732680   23196 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 23:41:20.732686   23196 kubeadm.go:310] 
	I0721 23:41:20.732725   23196 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 23:41:20.732730   23196 kubeadm.go:310] 
	I0721 23:41:20.732772   23196 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 23:41:20.732834   23196 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 23:41:20.732890   23196 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 23:41:20.732897   23196 kubeadm.go:310] 
	I0721 23:41:20.732984   23196 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 23:41:20.733082   23196 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 23:41:20.733093   23196 kubeadm.go:310] 
	I0721 23:41:20.733161   23196 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a27g5i.jpb7sxjvb5ai1hxv \
	I0721 23:41:20.733246   23196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0721 23:41:20.733265   23196 kubeadm.go:310] 	--control-plane 
	I0721 23:41:20.733271   23196 kubeadm.go:310] 
	I0721 23:41:20.733353   23196 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 23:41:20.733363   23196 kubeadm.go:310] 
	I0721 23:41:20.733433   23196 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a27g5i.jpb7sxjvb5ai1hxv \
	I0721 23:41:20.733525   23196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0721 23:41:20.733536   23196 cni.go:84] Creating CNI manager for ""
	I0721 23:41:20.733544   23196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0721 23:41:20.735154   23196 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0721 23:41:20.736326   23196 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0721 23:41:20.741393   23196 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0721 23:41:20.741411   23196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0721 23:41:20.761233   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0721 23:41:21.123036   23196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 23:41:21.123118   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:21.123118   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-564251 minikube.k8s.io/updated_at=2024_07_21T23_41_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-564251 minikube.k8s.io/primary=true
	I0721 23:41:21.143343   23196 ops.go:34] apiserver oom_adj: -16
	I0721 23:41:21.275861   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:21.776812   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:22.276729   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:22.776731   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:23.276283   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:23.776558   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:24.276251   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:24.776540   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:25.275977   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:25.776341   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:26.276236   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:26.776231   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:27.276729   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:27.776448   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:28.275886   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:28.776781   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:29.276896   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:29.775991   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:41:29.863582   23196 kubeadm.go:1113] duration metric: took 8.740521148s to wait for elevateKubeSystemPrivileges
	I0721 23:41:29.863624   23196 kubeadm.go:394] duration metric: took 22.722672686s to StartCluster
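	The burst of get sa default calls above is the readiness gate behind the elevateKubeSystemPrivileges metric: the minikube-rbac cluster-admin binding is created once, then the default service account is polled at roughly 500ms intervals until the controller-manager has materialized it. Reduced to a sketch (binary and kubeconfig paths as in the log):

	    K=/var/lib/minikube/binaries/v1.30.3/kubectl
	    sudo $K create clusterrolebinding minikube-rbac --clusterrole=cluster-admin \
	      --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	    until sudo $K get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do sleep 0.5; done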
	I0721 23:41:29.863643   23196 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:29.863734   23196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:41:29.864422   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:29.864676   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0721 23:41:29.864686   23196 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:41:29.864710   23196 start.go:241] waiting for startup goroutines ...
	I0721 23:41:29.864719   23196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0721 23:41:29.864789   23196 addons.go:69] Setting storage-provisioner=true in profile "ha-564251"
	I0721 23:41:29.864799   23196 addons.go:69] Setting default-storageclass=true in profile "ha-564251"
	I0721 23:41:29.864818   23196 addons.go:234] Setting addon storage-provisioner=true in "ha-564251"
	I0721 23:41:29.864836   23196 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-564251"
	I0721 23:41:29.864847   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:41:29.864872   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:29.865305   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.865336   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.865305   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.865409   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.880647   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0721 23:41:29.880990   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40797
	I0721 23:41:29.881121   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.881487   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.881649   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.881675   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.882032   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.882050   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.882053   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.882355   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.882595   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.882639   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.882658   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:41:29.884931   23196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:41:29.885289   23196 kapi.go:59] client config for ha-564251: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key", CAFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0721 23:41:29.885874   23196 cert_rotation.go:137] Starting client certificate rotation controller
	I0721 23:41:29.886108   23196 addons.go:234] Setting addon default-storageclass=true in "ha-564251"
	I0721 23:41:29.886158   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:41:29.886543   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.886582   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.898096   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0721 23:41:29.898528   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.899072   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.899094   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.899459   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.899650   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:41:29.901936   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:29.901985   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45743
	I0721 23:41:29.902725   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.903198   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.903220   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.903544   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.904041   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:29.904067   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:29.904083   23196 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 23:41:29.905493   23196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:41:29.905509   23196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 23:41:29.905528   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:29.908392   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:29.908744   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:29.908766   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:29.908907   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:29.909097   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:29.909254   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:29.909416   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:29.918993   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34639
	I0721 23:41:29.919403   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:29.919823   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:29.919840   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:29.920108   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:29.920244   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:41:29.921577   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:29.921782   23196 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 23:41:29.921797   23196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 23:41:29.921813   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:29.924296   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:29.924628   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:29.924656   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:29.924813   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:29.924988   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:29.925130   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:29.925315   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:29.980907   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0721 23:41:30.143350   23196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 23:41:30.170523   23196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:41:30.590713   23196 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
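The kubectl-over-sed pipeline at 23:41:29.980907 edits the Corefile held in the coredns ConfigMap before replacing it: a hosts stanza is inserted ahead of the forward plugin, and a log directive ahead of errors. After the replace, the affected portion of the Corefile reads as follows (plugins between errors and hosts omitted, marked "..."):

	log
	errors
	...
	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf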
	I0721 23:41:30.590799   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.590825   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.591134   23196 main.go:141] libmachine: (ha-564251) DBG | Closing plugin on server side
	I0721 23:41:30.591163   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.591176   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.591191   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.591203   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.591437   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.591451   23196 main.go:141] libmachine: (ha-564251) DBG | Closing plugin on server side
	I0721 23:41:30.591452   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.591562   23196 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0721 23:41:30.591572   23196 round_trippers.go:469] Request Headers:
	I0721 23:41:30.591583   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:41:30.591593   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:41:30.605336   23196 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0721 23:41:30.605901   23196 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0721 23:41:30.605917   23196 round_trippers.go:469] Request Headers:
	I0721 23:41:30.605928   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:41:30.605934   23196 round_trippers.go:473]     Content-Type: application/json
	I0721 23:41:30.605939   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:41:30.609173   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:41:30.609317   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.609331   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.609642   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.609671   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.609648   23196 main.go:141] libmachine: (ha-564251) DBG | Closing plugin on server side
	I0721 23:41:30.790742   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.790765   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.791045   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.791064   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.791074   23196 main.go:141] libmachine: Making call to close driver server
	I0721 23:41:30.791083   23196 main.go:141] libmachine: (ha-564251) Calling .Close
	I0721 23:41:30.791296   23196 main.go:141] libmachine: Successfully made call to close driver server
	I0721 23:41:30.791313   23196 main.go:141] libmachine: Making call to close connection to plugin binary
	I0721 23:41:30.792879   23196 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0721 23:41:30.794066   23196 addons.go:510] duration metric: took 929.343381ms for enable addons: enabled=[default-storageclass storage-provisioner]
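The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is minikube updating the standard StorageClass while enabling the default-storageclass addon. A minimal client-go sketch of an equivalent update, assuming the PUT's payload sets the is-default-class annotation (minikube's actual code path may differ):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the same kubeconfig the log references.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		// GET /apis/storage.k8s.io/v1/storageclasses/standard
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// Assumed payload of the PUT: mark the class as the cluster default.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		// PUT /apis/storage.k8s.io/v1/storageclasses/standard
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}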
	I0721 23:41:30.794097   23196 start.go:246] waiting for cluster config update ...
	I0721 23:41:30.794108   23196 start.go:255] writing updated cluster config ...
	I0721 23:41:30.795568   23196 out.go:177] 
	I0721 23:41:30.797219   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:30.797291   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:41:30.798811   23196 out.go:177] * Starting "ha-564251-m02" control-plane node in "ha-564251" cluster
	I0721 23:41:30.800195   23196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:41:30.800223   23196 cache.go:56] Caching tarball of preloaded images
	I0721 23:41:30.800316   23196 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:41:30.800332   23196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:41:30.800437   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:41:30.800654   23196 start.go:360] acquireMachinesLock for ha-564251-m02: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:41:30.800720   23196 start.go:364] duration metric: took 40.272µs to acquireMachinesLock for "ha-564251-m02"
	I0721 23:41:30.800745   23196 start.go:93] Provisioning new machine with config: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
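The trailing &{Name:m02 IP: Port:8443 ...} element of that dump is the node entry about to be provisioned. A sketch of the fields the printout implies (a hypothetical subset, not minikube's full config type):

	type Node struct {
		Name              string // "m02"
		IP                string // empty until the VM gets a DHCP lease
		Port              int    // 8443
		KubernetesVersion string // "v1.30.3"
		ContainerRuntime  string // "crio"
		ControlPlane      bool   // true: this is a second control-plane node
		Worker            bool   // true: it also schedules workloads
	}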
	I0721 23:41:30.800853   23196 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0721 23:41:30.803086   23196 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 23:41:30.803186   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:30.803212   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:30.817649   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0721 23:41:30.818109   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:30.818581   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:30.818663   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:30.818994   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:30.819173   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetMachineName
	I0721 23:41:30.819372   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:30.819533   23196 start.go:159] libmachine.API.Create for "ha-564251" (driver="kvm2")
	I0721 23:41:30.819557   23196 client.go:168] LocalClient.Create starting
	I0721 23:41:30.819589   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0721 23:41:30.819616   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:41:30.819644   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:41:30.819692   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0721 23:41:30.819709   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:41:30.819719   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:41:30.819736   23196 main.go:141] libmachine: Running pre-create checks...
	I0721 23:41:30.819743   23196 main.go:141] libmachine: (ha-564251-m02) Calling .PreCreateCheck
	I0721 23:41:30.819884   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetConfigRaw
	I0721 23:41:30.820207   23196 main.go:141] libmachine: Creating machine...
	I0721 23:41:30.820218   23196 main.go:141] libmachine: (ha-564251-m02) Calling .Create
	I0721 23:41:30.820349   23196 main.go:141] libmachine: (ha-564251-m02) Creating KVM machine...
	I0721 23:41:30.821455   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found existing default KVM network
	I0721 23:41:30.821652   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found existing private KVM network mk-ha-564251
	I0721 23:41:30.821778   23196 main.go:141] libmachine: (ha-564251-m02) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02 ...
	I0721 23:41:30.821794   23196 main.go:141] libmachine: (ha-564251-m02) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:41:30.821846   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:30.821778   23576 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:41:30.821914   23196 main.go:141] libmachine: (ha-564251-m02) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:41:31.043777   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:31.043643   23576 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa...
	I0721 23:41:31.084055   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:31.083910   23576 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/ha-564251-m02.rawdisk...
	I0721 23:41:31.084094   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Writing magic tar header
	I0721 23:41:31.084110   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Writing SSH key tar header
	I0721 23:41:31.084130   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:31.084055   23576 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02 ...
	I0721 23:41:31.084198   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02
	I0721 23:41:31.084239   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0721 23:41:31.084254   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02 (perms=drwx------)
	I0721 23:41:31.084269   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0721 23:41:31.084281   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0721 23:41:31.084297   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0721 23:41:31.084308   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0721 23:41:31.084318   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:41:31.084335   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0721 23:41:31.084347   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0721 23:41:31.084358   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home/jenkins
	I0721 23:41:31.084369   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Checking permissions on dir: /home
	I0721 23:41:31.084379   23196 main.go:141] libmachine: (ha-564251-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0721 23:41:31.084389   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Skipping /home - not owner
	I0721 23:41:31.084397   23196 main.go:141] libmachine: (ha-564251-m02) Creating domain...
	I0721 23:41:31.085259   23196 main.go:141] libmachine: (ha-564251-m02) define libvirt domain using xml: 
	I0721 23:41:31.085278   23196 main.go:141] libmachine: (ha-564251-m02) <domain type='kvm'>
	I0721 23:41:31.085310   23196 main.go:141] libmachine: (ha-564251-m02)   <name>ha-564251-m02</name>
	I0721 23:41:31.085348   23196 main.go:141] libmachine: (ha-564251-m02)   <memory unit='MiB'>2200</memory>
	I0721 23:41:31.085358   23196 main.go:141] libmachine: (ha-564251-m02)   <vcpu>2</vcpu>
	I0721 23:41:31.085367   23196 main.go:141] libmachine: (ha-564251-m02)   <features>
	I0721 23:41:31.085376   23196 main.go:141] libmachine: (ha-564251-m02)     <acpi/>
	I0721 23:41:31.085385   23196 main.go:141] libmachine: (ha-564251-m02)     <apic/>
	I0721 23:41:31.085395   23196 main.go:141] libmachine: (ha-564251-m02)     <pae/>
	I0721 23:41:31.085405   23196 main.go:141] libmachine: (ha-564251-m02)     
	I0721 23:41:31.085418   23196 main.go:141] libmachine: (ha-564251-m02)   </features>
	I0721 23:41:31.085433   23196 main.go:141] libmachine: (ha-564251-m02)   <cpu mode='host-passthrough'>
	I0721 23:41:31.085444   23196 main.go:141] libmachine: (ha-564251-m02)   
	I0721 23:41:31.085452   23196 main.go:141] libmachine: (ha-564251-m02)   </cpu>
	I0721 23:41:31.085463   23196 main.go:141] libmachine: (ha-564251-m02)   <os>
	I0721 23:41:31.085470   23196 main.go:141] libmachine: (ha-564251-m02)     <type>hvm</type>
	I0721 23:41:31.085480   23196 main.go:141] libmachine: (ha-564251-m02)     <boot dev='cdrom'/>
	I0721 23:41:31.085503   23196 main.go:141] libmachine: (ha-564251-m02)     <boot dev='hd'/>
	I0721 23:41:31.085515   23196 main.go:141] libmachine: (ha-564251-m02)     <bootmenu enable='no'/>
	I0721 23:41:31.085524   23196 main.go:141] libmachine: (ha-564251-m02)   </os>
	I0721 23:41:31.085530   23196 main.go:141] libmachine: (ha-564251-m02)   <devices>
	I0721 23:41:31.085543   23196 main.go:141] libmachine: (ha-564251-m02)     <disk type='file' device='cdrom'>
	I0721 23:41:31.085556   23196 main.go:141] libmachine: (ha-564251-m02)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/boot2docker.iso'/>
	I0721 23:41:31.085568   23196 main.go:141] libmachine: (ha-564251-m02)       <target dev='hdc' bus='scsi'/>
	I0721 23:41:31.085576   23196 main.go:141] libmachine: (ha-564251-m02)       <readonly/>
	I0721 23:41:31.085590   23196 main.go:141] libmachine: (ha-564251-m02)     </disk>
	I0721 23:41:31.085601   23196 main.go:141] libmachine: (ha-564251-m02)     <disk type='file' device='disk'>
	I0721 23:41:31.085615   23196 main.go:141] libmachine: (ha-564251-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0721 23:41:31.085626   23196 main.go:141] libmachine: (ha-564251-m02)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/ha-564251-m02.rawdisk'/>
	I0721 23:41:31.085638   23196 main.go:141] libmachine: (ha-564251-m02)       <target dev='hda' bus='virtio'/>
	I0721 23:41:31.085648   23196 main.go:141] libmachine: (ha-564251-m02)     </disk>
	I0721 23:41:31.085657   23196 main.go:141] libmachine: (ha-564251-m02)     <interface type='network'>
	I0721 23:41:31.085667   23196 main.go:141] libmachine: (ha-564251-m02)       <source network='mk-ha-564251'/>
	I0721 23:41:31.085674   23196 main.go:141] libmachine: (ha-564251-m02)       <model type='virtio'/>
	I0721 23:41:31.085683   23196 main.go:141] libmachine: (ha-564251-m02)     </interface>
	I0721 23:41:31.085690   23196 main.go:141] libmachine: (ha-564251-m02)     <interface type='network'>
	I0721 23:41:31.085704   23196 main.go:141] libmachine: (ha-564251-m02)       <source network='default'/>
	I0721 23:41:31.085715   23196 main.go:141] libmachine: (ha-564251-m02)       <model type='virtio'/>
	I0721 23:41:31.085725   23196 main.go:141] libmachine: (ha-564251-m02)     </interface>
	I0721 23:41:31.085733   23196 main.go:141] libmachine: (ha-564251-m02)     <serial type='pty'>
	I0721 23:41:31.085743   23196 main.go:141] libmachine: (ha-564251-m02)       <target port='0'/>
	I0721 23:41:31.085751   23196 main.go:141] libmachine: (ha-564251-m02)     </serial>
	I0721 23:41:31.085759   23196 main.go:141] libmachine: (ha-564251-m02)     <console type='pty'>
	I0721 23:41:31.085771   23196 main.go:141] libmachine: (ha-564251-m02)       <target type='serial' port='0'/>
	I0721 23:41:31.085781   23196 main.go:141] libmachine: (ha-564251-m02)     </console>
	I0721 23:41:31.085805   23196 main.go:141] libmachine: (ha-564251-m02)     <rng model='virtio'>
	I0721 23:41:31.085823   23196 main.go:141] libmachine: (ha-564251-m02)       <backend model='random'>/dev/random</backend>
	I0721 23:41:31.085836   23196 main.go:141] libmachine: (ha-564251-m02)     </rng>
	I0721 23:41:31.085846   23196 main.go:141] libmachine: (ha-564251-m02)     
	I0721 23:41:31.085854   23196 main.go:141] libmachine: (ha-564251-m02)     
	I0721 23:41:31.085864   23196 main.go:141] libmachine: (ha-564251-m02)   </devices>
	I0721 23:41:31.085872   23196 main.go:141] libmachine: (ha-564251-m02) </domain>
	I0721 23:41:31.085881   23196 main.go:141] libmachine: (ha-564251-m02) 
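Assembled from the log lines above (blank placeholder lines dropped), the libvirt domain definition reads:

	<domain type='kvm'>
	  <name>ha-564251-m02</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'></cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/ha-564251-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-ha-564251'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>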
	I0721 23:41:31.092166   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:91:eb:c9 in network default
	I0721 23:41:31.092648   23196 main.go:141] libmachine: (ha-564251-m02) Ensuring networks are active...
	I0721 23:41:31.092671   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:31.093348   23196 main.go:141] libmachine: (ha-564251-m02) Ensuring network default is active
	I0721 23:41:31.093652   23196 main.go:141] libmachine: (ha-564251-m02) Ensuring network mk-ha-564251 is active
	I0721 23:41:31.093972   23196 main.go:141] libmachine: (ha-564251-m02) Getting domain xml...
	I0721 23:41:31.094686   23196 main.go:141] libmachine: (ha-564251-m02) Creating domain...
	I0721 23:41:32.308261   23196 main.go:141] libmachine: (ha-564251-m02) Waiting to get IP...
	I0721 23:41:32.309190   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:32.309536   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:32.309560   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:32.309517   23576 retry.go:31] will retry after 279.941039ms: waiting for machine to come up
	I0721 23:41:32.590998   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:32.591342   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:32.591371   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:32.591289   23576 retry.go:31] will retry after 273.960435ms: waiting for machine to come up
	I0721 23:41:32.866931   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:32.867402   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:32.867426   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:32.867369   23576 retry.go:31] will retry after 384.003174ms: waiting for machine to come up
	I0721 23:41:33.252760   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:33.253210   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:33.253232   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:33.253160   23576 retry.go:31] will retry after 437.950795ms: waiting for machine to come up
	I0721 23:41:33.692821   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:33.693233   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:33.693258   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:33.693180   23576 retry.go:31] will retry after 658.15435ms: waiting for machine to come up
	I0721 23:41:34.353216   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:34.353605   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:34.353628   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:34.353550   23576 retry.go:31] will retry after 893.609942ms: waiting for machine to come up
	I0721 23:41:35.248776   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:35.249208   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:35.249231   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:35.249177   23576 retry.go:31] will retry after 1.020462835s: waiting for machine to come up
	I0721 23:41:36.271363   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:36.271841   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:36.271876   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:36.271785   23576 retry.go:31] will retry after 1.308791009s: waiting for machine to come up
	I0721 23:41:37.581782   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:37.582248   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:37.582278   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:37.582175   23576 retry.go:31] will retry after 1.458259843s: waiting for machine to come up
	I0721 23:41:39.042669   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:39.043011   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:39.043055   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:39.042963   23576 retry.go:31] will retry after 1.628790411s: waiting for machine to come up
	I0721 23:41:40.673608   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:40.674113   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:40.674138   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:40.674037   23576 retry.go:31] will retry after 2.891000365s: waiting for machine to come up
	I0721 23:41:43.566289   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:43.566794   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:43.566820   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:43.566748   23576 retry.go:31] will retry after 3.017497145s: waiting for machine to come up
	I0721 23:41:46.585567   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:46.585983   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find current IP address of domain ha-564251-m02 in network mk-ha-564251
	I0721 23:41:46.586010   23196 main.go:141] libmachine: (ha-564251-m02) DBG | I0721 23:41:46.585943   23576 retry.go:31] will retry after 4.417647061s: waiting for machine to come up
	I0721 23:41:51.005071   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.005507   23196 main.go:141] libmachine: (ha-564251-m02) Found IP for machine: 192.168.39.202
	I0721 23:41:51.005535   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has current primary IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
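The "will retry after ..." loop above (retry.go:31) polls libvirt's DHCP leases with growing, jittered delays until the domain's MAC resolves to an address. A minimal Go sketch of that pattern, with a hypothetical lookupIP standing in for the lease query:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errNoLease = errors.New("unable to find current IP address")

	// lookupIP is hypothetical; the real code asks libvirt for a DHCP lease
	// matching the domain's MAC (52:54:00:38:f8:82) in network mk-ha-564251.
	func lookupIP(mac string) (string, error) {
		return "", errNoLease
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 280 * time.Millisecond // first delay in the log is ~280ms
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
			time.Sleep(backoff)
			backoff = backoff * 3 / 2 // the log's delays grow roughly 1.5x, with jitter
		}
		return "", fmt.Errorf("timed out waiting for IP for MAC %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:38:f8:82", 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}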
	I0721 23:41:51.005544   23196 main.go:141] libmachine: (ha-564251-m02) Reserving static IP address...
	I0721 23:41:51.005920   23196 main.go:141] libmachine: (ha-564251-m02) DBG | unable to find host DHCP lease matching {name: "ha-564251-m02", mac: "52:54:00:38:f8:82", ip: "192.168.39.202"} in network mk-ha-564251
	I0721 23:41:51.075991   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Getting to WaitForSSH function...
	I0721 23:41:51.076035   23196 main.go:141] libmachine: (ha-564251-m02) Reserved static IP address: 192.168.39.202
	I0721 23:41:51.076050   23196 main.go:141] libmachine: (ha-564251-m02) Waiting for SSH to be available...
	I0721 23:41:51.078414   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.078825   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.078855   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.078949   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Using SSH client type: external
	I0721 23:41:51.078970   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa (-rw-------)
	I0721 23:41:51.078995   23196 main.go:141] libmachine: (ha-564251-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0721 23:41:51.079009   23196 main.go:141] libmachine: (ha-564251-m02) DBG | About to run SSH command:
	I0721 23:41:51.079024   23196 main.go:141] libmachine: (ha-564251-m02) DBG | exit 0
	I0721 23:41:51.206770   23196 main.go:141] libmachine: (ha-564251-m02) DBG | SSH cmd err, output: <nil>: 
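Flattened into a single command, the external SSH probe logged at 23:41:51.078995 is equivalent to:

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa -p 22 "exit 0"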
	I0721 23:41:51.206977   23196 main.go:141] libmachine: (ha-564251-m02) KVM machine creation complete!
	I0721 23:41:51.207321   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetConfigRaw
	I0721 23:41:51.207919   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:51.208096   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:51.208248   23196 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0721 23:41:51.208265   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:41:51.209635   23196 main.go:141] libmachine: Detecting operating system of created instance...
	I0721 23:41:51.209650   23196 main.go:141] libmachine: Waiting for SSH to be available...
	I0721 23:41:51.209664   23196 main.go:141] libmachine: Getting to WaitForSSH function...
	I0721 23:41:51.209676   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.212146   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.212578   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.212603   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.212780   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.212942   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.213098   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.213216   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.213384   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.213576   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.213588   23196 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0721 23:41:51.325723   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:41:51.325745   23196 main.go:141] libmachine: Detecting the provisioner...
	I0721 23:41:51.325773   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.328472   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.328853   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.328881   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.328963   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.329128   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.329296   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.329445   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.329591   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.329767   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.329781   23196 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0721 23:41:51.439120   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0721 23:41:51.439200   23196 main.go:141] libmachine: found compatible host: buildroot
	I0721 23:41:51.439211   23196 main.go:141] libmachine: Provisioning with buildroot...
	I0721 23:41:51.439224   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetMachineName
	I0721 23:41:51.439507   23196 buildroot.go:166] provisioning hostname "ha-564251-m02"
	I0721 23:41:51.439529   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetMachineName
	I0721 23:41:51.439725   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.442124   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.442501   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.442536   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.442671   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.442847   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.443009   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.443198   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.443385   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.443600   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.443613   23196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-564251-m02 && echo "ha-564251-m02" | sudo tee /etc/hostname
	I0721 23:41:51.563554   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251-m02
	
	I0721 23:41:51.563586   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.566345   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.566765   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.566793   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.566949   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.567120   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.567292   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.567459   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.567583   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.567731   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.567746   23196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-564251-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-564251-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-564251-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:41:51.686398   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:41:51.686425   23196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:41:51.686443   23196 buildroot.go:174] setting up certificates
	I0721 23:41:51.686451   23196 provision.go:84] configureAuth start
	I0721 23:41:51.686460   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetMachineName
	I0721 23:41:51.686809   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:41:51.689485   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.689782   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.689809   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.690002   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.692216   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.692584   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.692610   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.692745   23196 provision.go:143] copyHostCerts
	I0721 23:41:51.692783   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:41:51.692812   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0721 23:41:51.692820   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:41:51.692884   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:41:51.692964   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:41:51.692981   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0721 23:41:51.692987   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:41:51.693010   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:41:51.693061   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:41:51.693077   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0721 23:41:51.693081   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:41:51.693100   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:41:51.693156   23196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.ha-564251-m02 san=[127.0.0.1 192.168.39.202 ha-564251-m02 localhost minikube]
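provision.go:117 generates a server certificate signed by the minikube CA with the listed SANs. An illustrative crypto/x509 sketch of that kind of issuance (not minikube's actual implementation; error handling elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-ins for ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-564251-m02"}},
			// SANs taken from the provision.go:117 line above.
			DNSNames:    []string{"ha-564251-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.202")},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // -> server.pem
	}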
	I0721 23:41:51.755558   23196 provision.go:177] copyRemoteCerts
	I0721 23:41:51.755608   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:41:51.755634   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.758285   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.758634   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.758658   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.758847   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.759014   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.759144   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.759245   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:41:51.844033   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0721 23:41:51.844108   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:41:51.867176   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0721 23:41:51.867228   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:41:51.888974   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0721 23:41:51.889030   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0721 23:41:51.910077   23196 provision.go:87] duration metric: took 223.613935ms to configureAuth
	I0721 23:41:51.910101   23196 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:41:51.910281   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:51.910377   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:51.913029   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.913307   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:51.913334   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:51.913488   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:51.913621   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.913718   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:51.913790   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:51.913942   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:51.914083   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:51.914095   23196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:41:52.180201   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:41:52.180229   23196 main.go:141] libmachine: Checking connection to Docker...
	I0721 23:41:52.180238   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetURL
	I0721 23:41:52.181546   23196 main.go:141] libmachine: (ha-564251-m02) DBG | Using libvirt version 6000000
	I0721 23:41:52.183518   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.183824   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.183845   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.183983   23196 main.go:141] libmachine: Docker is up and running!
	I0721 23:41:52.184001   23196 main.go:141] libmachine: Reticulating splines...
	I0721 23:41:52.184013   23196 client.go:171] duration metric: took 21.364444929s to LocalClient.Create
	I0721 23:41:52.184042   23196 start.go:167] duration metric: took 21.364519572s to libmachine.API.Create "ha-564251"
	I0721 23:41:52.184054   23196 start.go:293] postStartSetup for "ha-564251-m02" (driver="kvm2")
	I0721 23:41:52.184066   23196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:41:52.184093   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.184318   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:41:52.184338   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:52.186492   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.186805   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.186873   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.186944   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:52.187195   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.187349   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:52.187486   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:41:52.272188   23196 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:41:52.275999   23196 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:41:52.276022   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:41:52.276086   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:41:52.276168   23196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0721 23:41:52.276179   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0721 23:41:52.276279   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 23:41:52.284945   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:41:52.306026   23196 start.go:296] duration metric: took 121.960075ms for postStartSetup
	I0721 23:41:52.306075   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetConfigRaw
	I0721 23:41:52.306683   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:41:52.309314   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.309643   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.309671   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.309870   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:41:52.310034   23196 start.go:128] duration metric: took 21.509168801s to createHost
	I0721 23:41:52.310055   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:52.312372   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.312732   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.312758   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.312846   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:52.313030   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.313176   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.313288   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:52.313451   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:41:52.313603   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0721 23:41:52.313613   23196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 23:41:52.422971   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605312.384668670
	
	I0721 23:41:52.422996   23196 fix.go:216] guest clock: 1721605312.384668670
	I0721 23:41:52.423004   23196 fix.go:229] Guest: 2024-07-21 23:41:52.38466867 +0000 UTC Remote: 2024-07-21 23:41:52.310044935 +0000 UTC m=+71.796673073 (delta=74.623735ms)
	I0721 23:41:52.423016   23196 fix.go:200] guest clock delta is within tolerance: 74.623735ms
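For reference, the clock check above runs `date +%s.%N` on the guest over SSH and compares the parsed epoch timestamp against the host-side time recorded just before the command. A minimal Go sketch of that comparison follows; the 2-second tolerance is illustrative (minikube's actual threshold lives elsewhere), and the sample values are copied from the log lines above.

// clockdelta.go - sketch of the guest-clock tolerance check, assuming an
// illustrative 2s tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest value from the log: 1721605312.384668670
	guest, err := parseGuestClock("1721605312.384668670")
	if err != nil {
		panic(err)
	}
	// Host-side timestamp from the same log line.
	remote := time.Date(2024, 7, 21, 23, 41, 52, 310044935, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}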
	I0721 23:41:52.423021   23196 start.go:83] releasing machines lock for "ha-564251-m02", held for 21.622289193s
	I0721 23:41:52.423039   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.423338   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:41:52.425783   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.426046   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.426069   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.428421   23196 out.go:177] * Found network options:
	I0721 23:41:52.429810   23196 out.go:177]   - NO_PROXY=192.168.39.91
	W0721 23:41:52.431059   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0721 23:41:52.431089   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.431611   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.431829   23196 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:41:52.431925   23196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:41:52.431960   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	W0721 23:41:52.432043   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0721 23:41:52.432125   23196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:41:52.432148   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:41:52.434775   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.435025   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.435195   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.435224   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.435352   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:52.435461   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:52.435486   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:52.435537   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.435607   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:41:52.435675   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:52.435759   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:41:52.435823   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:41:52.435919   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:41:52.436051   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:41:52.668235   23196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:41:52.673505   23196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:41:52.673555   23196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:41:52.689044   23196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
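A small Go sketch of the disable step above, assuming glob patterns equivalent to the `find` expression; the `.mk_disabled` suffix is taken from the log.

// cnidisable.go - rename bridge/podman CNI configs so the runtime ignores them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat) // Glob only errors on a bad pattern
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}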
	I0721 23:41:52.689060   23196 start.go:495] detecting cgroup driver to use...
	I0721 23:41:52.689109   23196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:41:52.703951   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:41:52.717029   23196 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:41:52.717089   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:41:52.730341   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:41:52.743683   23196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:41:52.852147   23196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:41:52.991439   23196 docker.go:233] disabling docker service ...
	I0721 23:41:52.991501   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:41:53.005176   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:41:53.017426   23196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:41:53.149184   23196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:41:53.253962   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0721 23:41:53.266638   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:41:53.285081   23196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:41:53.285147   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.294456   23196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:41:53.294518   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.304023   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.313431   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.323972   23196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:41:53.333492   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.342713   23196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:41:53.358065   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
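The CRI-O reconfiguration above is a series of in-place `sed` rewrites of /etc/crio/crio.conf.d/02-crio.conf. A minimal Go equivalent of the key-replacement pattern, operating on a local copy for illustration (the `setKey` helper is hypothetical, not minikube's code):

// crioconf.go - rewrite `key = ...` lines the way the sed expressions above do.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey rewrites any line assigning `key` so that it assigns `value`.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	path := "02-crio.conf" // local copy of /etc/crio/crio.conf.d/02-crio.conf
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}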
	I0721 23:41:53.367571   23196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:41:53.376039   23196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0721 23:41:53.376091   23196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0721 23:41:53.387243   23196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
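The sysctl probe fails with status 255 because /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the fallback is to load the module and then enable IP forwarding. A sketch of those two steps, assuming it runs as root on Linux:

// netprep.go - load br_netfilter and enable ip_forward, mirroring the commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of `sudo modprobe br_netfilter`.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe br_netfilter: %v (%s)\n", err, out)
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Printf("enable ip_forward: %v\n", err)
	}
}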
	I0721 23:41:53.396362   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:41:53.500320   23196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0721 23:41:53.631312   23196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:41:53.631382   23196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:41:53.635842   23196 start.go:563] Will wait 60s for crictl version
	I0721 23:41:53.635905   23196 ssh_runner.go:195] Run: which crictl
	I0721 23:41:53.639388   23196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:41:53.680490   23196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0721 23:41:53.680577   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:41:53.706998   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:41:53.735897   23196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:41:53.737231   23196 out.go:177]   - env NO_PROXY=192.168.39.91
	I0721 23:41:53.738546   23196 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:41:53.741241   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:53.741622   23196 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:41:44 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:41:53.741649   23196 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:41:53.741830   23196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:41:53.745640   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
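The /etc/hosts one-liner above is an idempotent upsert: filter out any existing host.minikube.internal line, append the fresh mapping, stage through /tmp/h.$$, and copy back with sudo. A Go sketch of the same idea (writing directly instead of staging, so it also needs root):

// hostsadd.go - idempotently (re)add the host.minikube.internal mapping.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the same hostname before re-adding it.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		keep = append(keep, line)
	}
	keep = append(keep, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}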
	I0721 23:41:53.757594   23196 mustload.go:65] Loading cluster: ha-564251
	I0721 23:41:53.757751   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:41:53.757983   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:53.758015   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:53.773453   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0721 23:41:53.773841   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:53.774308   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:53.774330   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:53.774705   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:53.774900   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:41:53.776562   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:41:53.776847   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:53.776888   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:53.791078   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0721 23:41:53.791437   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:53.791839   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:53.791859   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:53.792147   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:53.792495   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:53.792646   23196 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251 for IP: 192.168.39.202
	I0721 23:41:53.792658   23196 certs.go:194] generating shared ca certs ...
	I0721 23:41:53.792671   23196 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:53.792778   23196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:41:53.792812   23196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:41:53.792820   23196 certs.go:256] generating profile certs ...
	I0721 23:41:53.792910   23196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key
	I0721 23:41:53.792937   23196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.c0c593bf
	I0721 23:41:53.792948   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.c0c593bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91 192.168.39.202 192.168.39.254]
	I0721 23:41:54.020469   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.c0c593bf ...
	I0721 23:41:54.020494   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.c0c593bf: {Name:mk0d4d16dfd271a385f6ab767cfa09f740f8d565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:54.020652   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.c0c593bf ...
	I0721 23:41:54.020665   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.c0c593bf: {Name:mk96eec0984ded953402c5b044b0f82745c535b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:41:54.020731   23196 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.c0c593bf -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt
	I0721 23:41:54.020855   23196 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.c0c593bf -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key
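Note the IP SAN list in the apiserver cert above: it covers the in-cluster service IPs, loopback, both control-plane node IPs, and the kube-vip VIP 192.168.39.254, so clients can reach the apiserver by any of them. A hedged sketch of issuing such a cert with Go's crypto/x509 follows; the self-signed CA here is purely illustrative, since minikube signs with its existing minikubeCA key rather than generating a new one.

// apiservercert.go - issue a server cert whose IP SANs match the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA; minikube reuses the existing minikubeCA key instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs from the log: service IPs, loopback, node IPs, and the VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.91"), net.ParseIP("192.168.39.202"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}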
	I0721 23:41:54.020970   23196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key
	I0721 23:41:54.020985   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0721 23:41:54.020997   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0721 23:41:54.021010   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0721 23:41:54.021023   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0721 23:41:54.021035   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0721 23:41:54.021048   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0721 23:41:54.021059   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0721 23:41:54.021071   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0721 23:41:54.021111   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0721 23:41:54.021136   23196 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0721 23:41:54.021145   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:41:54.021164   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:41:54.021184   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:41:54.021204   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:41:54.021238   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:41:54.021264   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0721 23:41:54.021277   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0721 23:41:54.021290   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:54.021319   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:54.023945   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:54.024508   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:54.024538   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:54.024735   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:54.024946   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:54.025128   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:54.025257   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:54.094999   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0721 23:41:54.099479   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0721 23:41:54.109463   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0721 23:41:54.113544   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0721 23:41:54.122906   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0721 23:41:54.126673   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0721 23:41:54.136429   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0721 23:41:54.139970   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0721 23:41:54.149258   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0721 23:41:54.152853   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0721 23:41:54.161904   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0721 23:41:54.165554   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0721 23:41:54.174669   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:41:54.199080   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:41:54.223014   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:41:54.246728   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:41:54.270483   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0721 23:41:54.291692   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:41:54.312692   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:41:54.333545   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:41:54.354460   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0721 23:41:54.375476   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0721 23:41:54.396228   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:41:54.417338   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0721 23:41:54.433622   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0721 23:41:54.450100   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0721 23:41:54.466106   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0721 23:41:54.482430   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0721 23:41:54.498541   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0721 23:41:54.513446   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0721 23:41:54.528753   23196 ssh_runner.go:195] Run: openssl version
	I0721 23:41:54.533953   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0721 23:41:54.543439   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0721 23:41:54.547394   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0721 23:41:54.547436   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0721 23:41:54.552691   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 23:41:54.562035   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:41:54.572210   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:54.575964   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:54.576016   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:41:54.580923   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 23:41:54.590450   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0721 23:41:54.600593   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0721 23:41:54.604659   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0721 23:41:54.604693   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0721 23:41:54.609777   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
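The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash links: `openssl x509 -hash -noout` prints the hash of the cert's subject name, and the `<hash>.0` link is what OpenSSL's CApath lookup expects. A sketch that shells out to openssl the same way and creates the link (paths illustrative):

// certlinks.go - create the <subject-hash>.0 symlink for a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// `ln -fs` equivalent: drop any stale link, then create it.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}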
	I0721 23:41:54.620009   23196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:41:54.623546   23196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:41:54.623592   23196 kubeadm.go:934] updating node {m02 192.168.39.202 8443 v1.30.3 crio true true} ...
	I0721 23:41:54.623672   23196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-564251-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 23:41:54.623695   23196 kube-vip.go:115] generating kube-vip config ...
	I0721 23:41:54.623726   23196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0721 23:41:54.646367   23196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0721 23:41:54.646418   23196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
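The manifest above is generated per cluster, with the VIP (192.168.39.254) and API port (8443) substituted in. A heavily abbreviated text/template sketch of that substitution; the full manifest has many more fields, as shown above.

// kubevip.go - template the kube-vip static-pod manifest with the cluster VIP.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: [manager]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	_ = t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443})
}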
	I0721 23:41:54.646459   23196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:41:54.658093   23196 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0721 23:41:54.658134   23196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0721 23:41:54.666905   23196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0721 23:41:54.666929   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0721 23:41:54.666970   23196 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0721 23:41:54.667001   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0721 23:41:54.667008   23196 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0721 23:41:54.670824   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0721 23:41:54.670853   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0721 23:41:55.493266   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0721 23:41:55.493355   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0721 23:41:55.497798   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0721 23:41:55.497827   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0721 23:41:55.666177   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:41:55.699325   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0721 23:41:55.699430   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0721 23:41:55.711440   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0721 23:41:55.711478   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
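The `?checksum=file:...sha256` URLs above mean each binary is verified against its published .sha256 file before being copied to the node (minikube's download package handles this internally). A plain net/http sketch of the same download-and-verify flow:

// fetchbinary.go - download a Kubernetes binary and verify its sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified and wrote kubectl")
}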
	I0721 23:41:56.088381   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0721 23:41:56.097283   23196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0721 23:41:56.112806   23196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:41:56.127525   23196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0721 23:41:56.142595   23196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0721 23:41:56.145949   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:41:56.156798   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:41:56.258151   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:41:56.273277   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:41:56.273786   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:41:56.273847   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:41:56.291329   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37707
	I0721 23:41:56.291911   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:41:56.292375   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:41:56.292395   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:41:56.292729   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:41:56.292917   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:41:56.293055   23196 start.go:317] joinCluster: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:41:56.293140   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0721 23:41:56.293155   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:41:56.296437   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:56.296935   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:41:56.296965   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:41:56.297153   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:41:56.297332   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:41:56.297500   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:41:56.297629   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:41:56.440022   23196 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:41:56.440065   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 08e5ji.aajvcalhdut83cxr --discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-564251-m02 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443"
	I0721 23:42:19.196999   23196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 08e5ji.aajvcalhdut83cxr --discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-564251-m02 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443": (22.756910365s)
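The join flow above is two steps: run `kubeadm token create --print-join-command --ttl=0` on the primary, then append the node-specific control-plane flags to the printed command before executing it on m02. A small Go sketch of the command assembly, using the token, hash, and flags taken verbatim from the log:

// joincmd.go - assemble the control-plane join command for the new node.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Output of `kubeadm token create --print-join-command --ttl=0` on the primary.
	base := "kubeadm join control-plane.minikube.internal:8443 " +
		"--token 08e5ji.aajvcalhdut83cxr " +
		"--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d"
	// Node-specific flags appended for a second control plane, as in the log.
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=ha-564251-m02",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.202",
		"--apiserver-bind-port=8443",
	}
	fmt.Println(base + " " + strings.Join(extra, " "))
}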
	I0721 23:42:19.197038   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0721 23:42:19.740638   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-564251-m02 minikube.k8s.io/updated_at=2024_07_21T23_42_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-564251 minikube.k8s.io/primary=false
	I0721 23:42:19.851899   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-564251-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0721 23:42:19.983706   23196 start.go:319] duration metric: took 23.690643373s to joinCluster
	I0721 23:42:19.983780   23196 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:42:19.984067   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:42:19.985799   23196 out.go:177] * Verifying Kubernetes components...
	I0721 23:42:19.986844   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:42:20.243378   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:42:20.316427   23196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:42:20.316792   23196 kapi.go:59] client config for ha-564251: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key", CAFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0721 23:42:20.316877   23196 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.91:8443
	I0721 23:42:20.317156   23196 node_ready.go:35] waiting up to 6m0s for node "ha-564251-m02" to be "Ready" ...
	I0721 23:42:20.317269   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:20.317282   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:20.317292   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:20.317296   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:20.336442   23196 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0721 23:42:20.818326   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:20.818348   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:20.818361   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:20.818367   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:20.821723   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:21.317491   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:21.317510   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:21.317518   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:21.317521   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:21.322410   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:21.818257   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:21.818276   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:21.818284   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:21.818288   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:21.821223   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:22.318085   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:22.318112   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:22.318121   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:22.318135   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:22.321462   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:22.322038   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
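The repeating GETs above are a readiness poll: fetch the node object roughly every 500ms, check its Ready condition, and give up after the 6m deadline. A bare net/http sketch of the loop follows; minikube layers logging round-trippers and TLS client certs over the same requests, both of which are omitted here, so the URL assumes an unauthenticated endpoint such as one exposed by `kubectl proxy`.

// nodewait.go - poll a node's Ready condition until True or a 6m deadline.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func ready(url string) (bool, error) {
	resp, err := http.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	url := "http://127.0.0.1:8001/api/v1/nodes/ha-564251-m02" // e.g. via `kubectl proxy`
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := ready(url); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}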
	I0721 23:42:22.817369   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:22.817403   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:22.817411   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:22.817414   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:22.821429   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:23.317419   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:23.317438   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:23.317446   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:23.317449   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:23.320648   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:23.818236   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:23.818261   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:23.818273   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:23.818281   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:23.821320   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:24.318177   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:24.318198   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:24.318206   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:24.318212   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:24.321794   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:24.322590   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:24.817928   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:24.817953   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:24.817964   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:24.817970   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:24.822397   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:25.317695   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:25.317717   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:25.317727   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:25.317733   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:25.320800   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:25.818263   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:25.818287   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:25.818305   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:25.818310   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:25.821480   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:26.317875   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:26.317899   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:26.317910   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:26.317915   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:26.321277   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:26.818278   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:26.818296   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:26.818303   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:26.818306   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:26.822817   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:26.823289   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:27.317434   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:27.317456   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:27.317463   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:27.317467   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:27.320759   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:27.817671   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:27.817690   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:27.817698   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:27.817703   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:27.820392   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:28.317755   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:28.317777   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:28.317785   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:28.317789   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:28.320846   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:28.818065   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:28.818083   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:28.818091   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:28.818095   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:28.821179   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:29.318242   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:29.318268   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:29.318279   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:29.318287   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:29.356069   23196 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0721 23:42:29.356708   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:29.817972   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:29.817995   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:29.818003   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:29.818009   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:29.820909   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:30.317373   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:30.317396   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:30.317404   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:30.317408   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:30.320266   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:30.817493   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:30.817513   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:30.817522   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:30.817526   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:30.820482   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:31.317562   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:31.317597   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:31.317608   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:31.317613   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:31.321817   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:31.817643   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:31.817666   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:31.817677   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:31.817683   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:31.820508   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:31.821098   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:32.317456   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:32.317476   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:32.317484   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:32.317488   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:32.322017   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:42:32.818057   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:32.818076   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:32.818084   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:32.818089   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:32.821032   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:33.318322   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:33.318349   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:33.318359   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:33.318366   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:33.321755   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:33.817734   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:33.817751   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:33.817760   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:33.817763   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:33.821052   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:33.821766   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:34.318206   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:34.318226   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:34.318233   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:34.318237   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:34.321495   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:34.817545   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:34.817579   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:34.817590   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:34.817595   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:34.820807   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:35.317762   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:35.317787   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:35.317798   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:35.317803   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:35.320872   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:35.818257   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:35.818274   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:35.818282   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:35.818287   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:35.821211   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:35.821933   23196 node_ready.go:53] node "ha-564251-m02" has status "Ready":"False"
	I0721 23:42:36.318144   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:36.318164   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:36.318171   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:36.318176   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:36.321896   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:36.817764   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:36.817784   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:36.817793   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:36.817797   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:36.821184   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:37.318365   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:37.318395   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.318407   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.318417   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.322141   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:37.818241   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:37.818261   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.818271   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.818275   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.821251   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.821828   23196 node_ready.go:49] node "ha-564251-m02" has status "Ready":"True"
	I0721 23:42:37.821851   23196 node_ready.go:38] duration metric: took 17.504666665s for node "ha-564251-m02" to be "Ready" ...
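The ~500ms GET loop above is the node_ready wait: repeatedly fetch the node object and inspect its NodeReady condition. A minimal client-go sketch of the same check follows — not minikube's actual helper; the kubeconfig wiring and the poll interval are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls GET /api/v1/nodes/<name> until the NodeReady
// condition reports True, like the loop logged above.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient apiserver errors: keep polling until timeout
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "ha-564251-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node \"ha-564251-m02\" is Ready")
}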
	I0721 23:42:37.821862   23196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:42:37.821933   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:37.821945   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.821956   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.821966   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.831685   23196 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0721 23:42:37.837771   23196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.837841   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bsbzk
	I0721 23:42:37.837849   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.837857   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.837862   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.840272   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.840792   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:37.840805   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.840812   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.840816   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.843255   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.843999   23196 pod_ready.go:92] pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:37.844022   23196 pod_ready.go:81] duration metric: took 6.228906ms for pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.844034   23196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.844092   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f4lqn
	I0721 23:42:37.844100   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.844107   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.844111   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.846712   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.847698   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:37.847717   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.847727   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.847732   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.849786   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.850537   23196 pod_ready.go:92] pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:37.850555   23196 pod_ready.go:81] duration metric: took 6.509196ms for pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.850570   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.850638   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251
	I0721 23:42:37.850649   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.850659   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.850665   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.852494   23196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0721 23:42:37.853048   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:37.853064   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.853074   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.853079   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.855065   23196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0721 23:42:37.855808   23196 pod_ready.go:92] pod "etcd-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:37.855823   23196 pod_ready.go:81] duration metric: took 5.24199ms for pod "etcd-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.855833   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:37.855886   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251-m02
	I0721 23:42:37.855895   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.855905   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.855915   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.857862   23196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0721 23:42:37.858236   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:37.858248   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:37.858256   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:37.858263   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:37.860300   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:37.860668   23196 pod_ready.go:92] pod "etcd-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:37.860682   23196 pod_ready.go:81] duration metric: took 4.841194ms for pod "etcd-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
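Each pod_ready check above pairs a pod GET with a GET of the node it runs on, but the verdict the log prints comes from the pod's own conditions. A sketch of that predicate, reusing the corev1 import from the previous sketch:

// podIsReady mirrors the `has status "Ready":"True"` lines: a pod counts
// as Ready when its PodReady condition is ConditionTrue.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}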
	I0721 23:42:37.860697   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:38.019092   23196 request.go:629] Waited for 158.334528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251
	I0721 23:42:38.019148   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251
	I0721 23:42:38.019153   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.019160   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.019164   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.022158   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:38.219035   23196 request.go:629] Waited for 196.175145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:38.219084   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:38.219090   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.219098   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.219103   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.221664   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:38.222235   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:38.222261   23196 pod_ready.go:81] duration metric: took 361.557372ms for pod "kube-apiserver-ha-564251" in "kube-system" namespace to be "Ready" ...
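The "Waited for ... due to client-side throttling" lines are client-go's default token-bucket rate limiter (QPS 5, burst 10) pacing the burst of pod/node GETs; they are informational, not errors. Raising the limits on the rest.Config is the usual knob — the values below are illustrative, not what minikube configures:

import "k8s.io/client-go/rest"

// relaxThrottle bumps the client-side rate limiter so short bursts of
// requests (like the pod_ready checks above) are not delayed.
func relaxThrottle(cfg *rest.Config) *rest.Config {
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default is 10
	return cfg
}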
	I0721 23:42:38.222285   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:38.418315   23196 request.go:629] Waited for 195.950584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m02
	I0721 23:42:38.418385   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m02
	I0721 23:42:38.418390   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.418398   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.418403   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.421696   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:38.618798   23196 request.go:629] Waited for 196.383684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:38.618866   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:38.618871   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.618879   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.618882   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.621356   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:38.621824   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:38.621842   23196 pod_ready.go:81] duration metric: took 399.547546ms for pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:38.621852   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:38.818875   23196 request.go:629] Waited for 196.950973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251
	I0721 23:42:38.818937   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251
	I0721 23:42:38.818945   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:38.818954   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:38.818959   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:38.822032   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:39.018894   23196 request.go:629] Waited for 196.348282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:39.018978   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:39.018988   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.018993   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.018996   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.022059   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:39.022723   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:39.022743   23196 pod_ready.go:81] duration metric: took 400.884512ms for pod "kube-controller-manager-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.022755   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.218690   23196 request.go:629] Waited for 195.869375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m02
	I0721 23:42:39.218762   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m02
	I0721 23:42:39.218768   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.218783   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.218791   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.221688   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:39.418697   23196 request.go:629] Waited for 196.395764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:39.418770   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:39.418777   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.418789   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.418799   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.422125   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:39.422933   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:39.422954   23196 pod_ready.go:81] duration metric: took 400.191219ms for pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.422965   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8c6vn" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.619086   23196 request.go:629] Waited for 196.046312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8c6vn
	I0721 23:42:39.619141   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8c6vn
	I0721 23:42:39.619147   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.619161   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.619166   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.622167   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:39.819218   23196 request.go:629] Waited for 196.352929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:39.819278   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:39.819283   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:39.819290   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:39.819294   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:39.822488   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:39.822925   23196 pod_ready.go:92] pod "kube-proxy-8c6vn" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:39.822941   23196 pod_ready.go:81] duration metric: took 399.970562ms for pod "kube-proxy-8c6vn" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:39.822953   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-srpl8" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.019101   23196 request.go:629] Waited for 196.083444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srpl8
	I0721 23:42:40.019154   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srpl8
	I0721 23:42:40.019162   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.019169   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.019175   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.022507   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:40.218320   23196 request.go:629] Waited for 195.279025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:40.218399   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:40.218405   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.218412   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.218416   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.221318   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:40.221883   23196 pod_ready.go:92] pod "kube-proxy-srpl8" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:40.221903   23196 pod_ready.go:81] duration metric: took 398.939079ms for pod "kube-proxy-srpl8" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.221912   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.418974   23196 request.go:629] Waited for 196.993765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251
	I0721 23:42:40.419033   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251
	I0721 23:42:40.419037   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.419045   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.419048   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.422045   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:40.618865   23196 request.go:629] Waited for 196.30454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:40.618925   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:42:40.618930   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.618938   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.618942   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.621851   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:40.622454   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:40.622473   23196 pod_ready.go:81] duration metric: took 400.554697ms for pod "kube-scheduler-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.622486   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:40.818777   23196 request.go:629] Waited for 196.209908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m02
	I0721 23:42:40.818841   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m02
	I0721 23:42:40.818846   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:40.818852   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:40.818858   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:40.821719   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:41.018703   23196 request.go:629] Waited for 196.316562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:41.018752   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:42:41.018757   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.018765   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.018769   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.021756   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:41.022313   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:42:41.022331   23196 pod_ready.go:81] duration metric: took 399.837433ms for pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:42:41.022341   23196 pod_ready.go:38] duration metric: took 3.200465942s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:42:41.022357   23196 api_server.go:52] waiting for apiserver process to appear ...
	I0721 23:42:41.022414   23196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:42:41.039081   23196 api_server.go:72] duration metric: took 21.055262783s to wait for apiserver process to appear ...
	I0721 23:42:41.039099   23196 api_server.go:88] waiting for apiserver healthz status ...
	I0721 23:42:41.039115   23196 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0721 23:42:41.043473   23196 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
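	The healthz probe above is a plain HTTPS GET whose body must read "ok". A self-contained sketch with the standard library (TLS verification is skipped here only to keep the example short; the real client trusts the cluster CA):

import (
	"crypto/tls"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy replicates the "Checking apiserver healthz" step:
// a 200 OK response whose body is exactly "ok" counts as healthy.
func apiserverHealthy(base string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return err == nil && resp.StatusCode == http.StatusOK && string(body) == "ok"
}

Called as apiserverHealthy("https://192.168.39.91:8443") for the endpoint in this log.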
	I0721 23:42:41.043527   23196 round_trippers.go:463] GET https://192.168.39.91:8443/version
	I0721 23:42:41.043532   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.043540   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.043545   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.044552   23196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0721 23:42:41.044631   23196 api_server.go:141] control plane version: v1.30.3
	I0721 23:42:41.044646   23196 api_server.go:131] duration metric: took 5.540863ms to wait for apiserver health ...
	I0721 23:42:41.044652   23196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0721 23:42:41.219082   23196 request.go:629] Waited for 174.361325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:41.219145   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:41.219153   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.219162   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.219171   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.224530   23196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0721 23:42:41.228521   23196 system_pods.go:59] 17 kube-system pods found
	I0721 23:42:41.228549   23196 system_pods.go:61] "coredns-7db6d8ff4d-bsbzk" [7d58d6f8-de63-49bf-9017-3cac954350d0] Running
	I0721 23:42:41.228555   23196 system_pods.go:61] "coredns-7db6d8ff4d-f4lqn" [ebae638d-339c-4241-a5b3-ab4c766efc2f] Running
	I0721 23:42:41.228558   23196 system_pods.go:61] "etcd-ha-564251" [ba57dacd-8bb8-4fc5-8c55-ab660c773c4a] Running
	I0721 23:42:41.228561   23196 system_pods.go:61] "etcd-ha-564251-m02" [4c0aa7df-9cac-4a18-a30f-78412cfce28d] Running
	I0721 23:42:41.228564   23196 system_pods.go:61] "kindnet-99b2q" [84ff92b4-7ad2-44e7-a6e6-89dcbb9413e2] Running
	I0721 23:42:41.228567   23196 system_pods.go:61] "kindnet-jz5md" [f109e939-9f9b-4fa8-b844-4c2652615933] Running
	I0721 23:42:41.228572   23196 system_pods.go:61] "kube-apiserver-ha-564251" [284aac5b-c6af-4a2f-bece-dfb3ca4fde87] Running
	I0721 23:42:41.228575   23196 system_pods.go:61] "kube-apiserver-ha-564251-m02" [291efb5d-a0a6-4edd-8258-4a2b85f91e6f] Running
	I0721 23:42:41.228578   23196 system_pods.go:61] "kube-controller-manager-ha-564251" [44710bc5-1824-4df6-b321-ac7db26d18a5] Running
	I0721 23:42:41.228581   23196 system_pods.go:61] "kube-controller-manager-ha-564251-m02" [ec0dd23d-58ee-49ca-b8e4-29ad2032a915] Running
	I0721 23:42:41.228584   23196 system_pods.go:61] "kube-proxy-8c6vn" [5b85365a-8a91-4e17-be4f-efc76e876e35] Running
	I0721 23:42:41.228586   23196 system_pods.go:61] "kube-proxy-srpl8" [faae2035-d506-4dd6-98b6-c3c5f5b53e84] Running
	I0721 23:42:41.228589   23196 system_pods.go:61] "kube-scheduler-ha-564251" [c7cd3ce3-94c8-4369-ba32-b832940c6aec] Running
	I0721 23:42:41.228592   23196 system_pods.go:61] "kube-scheduler-ha-564251-m02" [23912687-c898-47f3-91a9-c8784fb5d557] Running
	I0721 23:42:41.228596   23196 system_pods.go:61] "kube-vip-ha-564251" [e865cc87-be77-43f3-bef2-4c47dbe7ffe5] Running
	I0721 23:42:41.228599   23196 system_pods.go:61] "kube-vip-ha-564251-m02" [84f924b2-df09-413e-8a12-658116f072d3] Running
	I0721 23:42:41.228602   23196 system_pods.go:61] "storage-provisioner" [75c1992e-23ca-41e0-b046-1b70a6f6f63a] Running
	I0721 23:42:41.228607   23196 system_pods.go:74] duration metric: took 183.949996ms to wait for pod list to return data ...
	I0721 23:42:41.228615   23196 default_sa.go:34] waiting for default service account to be created ...
	I0721 23:42:41.418917   23196 request.go:629] Waited for 190.227355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/default/serviceaccounts
	I0721 23:42:41.418994   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/default/serviceaccounts
	I0721 23:42:41.419003   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.419015   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.419026   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.422128   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:42:41.422375   23196 default_sa.go:45] found service account: "default"
	I0721 23:42:41.422390   23196 default_sa.go:55] duration metric: took 193.76933ms for default service account to be created ...
	I0721 23:42:41.422397   23196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0721 23:42:41.618838   23196 request.go:629] Waited for 196.378448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:41.618890   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:42:41.618901   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.618914   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.618918   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.625681   23196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0721 23:42:41.630548   23196 system_pods.go:86] 17 kube-system pods found
	I0721 23:42:41.630570   23196 system_pods.go:89] "coredns-7db6d8ff4d-bsbzk" [7d58d6f8-de63-49bf-9017-3cac954350d0] Running
	I0721 23:42:41.630575   23196 system_pods.go:89] "coredns-7db6d8ff4d-f4lqn" [ebae638d-339c-4241-a5b3-ab4c766efc2f] Running
	I0721 23:42:41.630580   23196 system_pods.go:89] "etcd-ha-564251" [ba57dacd-8bb8-4fc5-8c55-ab660c773c4a] Running
	I0721 23:42:41.630583   23196 system_pods.go:89] "etcd-ha-564251-m02" [4c0aa7df-9cac-4a18-a30f-78412cfce28d] Running
	I0721 23:42:41.630588   23196 system_pods.go:89] "kindnet-99b2q" [84ff92b4-7ad2-44e7-a6e6-89dcbb9413e2] Running
	I0721 23:42:41.630591   23196 system_pods.go:89] "kindnet-jz5md" [f109e939-9f9b-4fa8-b844-4c2652615933] Running
	I0721 23:42:41.630596   23196 system_pods.go:89] "kube-apiserver-ha-564251" [284aac5b-c6af-4a2f-bece-dfb3ca4fde87] Running
	I0721 23:42:41.630618   23196 system_pods.go:89] "kube-apiserver-ha-564251-m02" [291efb5d-a0a6-4edd-8258-4a2b85f91e6f] Running
	I0721 23:42:41.630625   23196 system_pods.go:89] "kube-controller-manager-ha-564251" [44710bc5-1824-4df6-b321-ac7db26d18a5] Running
	I0721 23:42:41.630637   23196 system_pods.go:89] "kube-controller-manager-ha-564251-m02" [ec0dd23d-58ee-49ca-b8e4-29ad2032a915] Running
	I0721 23:42:41.630644   23196 system_pods.go:89] "kube-proxy-8c6vn" [5b85365a-8a91-4e17-be4f-efc76e876e35] Running
	I0721 23:42:41.630651   23196 system_pods.go:89] "kube-proxy-srpl8" [faae2035-d506-4dd6-98b6-c3c5f5b53e84] Running
	I0721 23:42:41.630655   23196 system_pods.go:89] "kube-scheduler-ha-564251" [c7cd3ce3-94c8-4369-ba32-b832940c6aec] Running
	I0721 23:42:41.630660   23196 system_pods.go:89] "kube-scheduler-ha-564251-m02" [23912687-c898-47f3-91a9-c8784fb5d557] Running
	I0721 23:42:41.630664   23196 system_pods.go:89] "kube-vip-ha-564251" [e865cc87-be77-43f3-bef2-4c47dbe7ffe5] Running
	I0721 23:42:41.630668   23196 system_pods.go:89] "kube-vip-ha-564251-m02" [84f924b2-df09-413e-8a12-658116f072d3] Running
	I0721 23:42:41.630671   23196 system_pods.go:89] "storage-provisioner" [75c1992e-23ca-41e0-b046-1b70a6f6f63a] Running
	I0721 23:42:41.630678   23196 system_pods.go:126] duration metric: took 208.276125ms to wait for k8s-apps to be running ...
	I0721 23:42:41.630688   23196 system_svc.go:44] waiting for kubelet service to be running ....
	I0721 23:42:41.630736   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:42:41.645823   23196 system_svc.go:56] duration metric: took 15.126226ms WaitForService to wait for kubelet
	I0721 23:42:41.645852   23196 kubeadm.go:582] duration metric: took 21.662036695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:42:41.645871   23196 node_conditions.go:102] verifying NodePressure condition ...
	I0721 23:42:41.818236   23196 request.go:629] Waited for 172.279493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes
	I0721 23:42:41.818308   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes
	I0721 23:42:41.818316   23196 round_trippers.go:469] Request Headers:
	I0721 23:42:41.818330   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:42:41.818340   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:42:41.821351   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:42:41.822193   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:42:41.822215   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:42:41.822226   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:42:41.822229   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:42:41.822236   23196 node_conditions.go:105] duration metric: took 176.359038ms to run NodePressure ...
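The NodePressure step reads each node's capacity out of the nodes list response. A sketch of where the two logged figures come from (corev1 and fmt as in the earlier sketches):

// printCapacity emits the two values logged per node above:
// ephemeral-storage and cpu, taken from node.Status.Capacity.
func printCapacity(node *corev1.Node) {
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}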
	I0721 23:42:41.822249   23196 start.go:241] waiting for startup goroutines ...
	I0721 23:42:41.822275   23196 start.go:255] writing updated cluster config ...
	I0721 23:42:41.823794   23196 out.go:177] 
	I0721 23:42:41.825017   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:42:41.825098   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:42:41.826638   23196 out.go:177] * Starting "ha-564251-m03" control-plane node in "ha-564251" cluster
	I0721 23:42:41.827867   23196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:42:41.827890   23196 cache.go:56] Caching tarball of preloaded images
	I0721 23:42:41.827972   23196 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:42:41.827982   23196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:42:41.828071   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:42:41.828233   23196 start.go:360] acquireMachinesLock for ha-564251-m03: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:42:41.828272   23196 start.go:364] duration metric: took 22.9µs to acquireMachinesLock for "ha-564251-m03"
	I0721 23:42:41.828292   23196 start.go:93] Provisioning new machine with config: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:42:41.828373   23196 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0721 23:42:41.829697   23196 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0721 23:42:41.829778   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:42:41.829810   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:42:41.848164   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0721 23:42:41.848575   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:42:41.849081   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:42:41.849103   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:42:41.849379   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:42:41.849650   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetMachineName
	I0721 23:42:41.849794   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:42:41.849971   23196 start.go:159] libmachine.API.Create for "ha-564251" (driver="kvm2")
	I0721 23:42:41.850001   23196 client.go:168] LocalClient.Create starting
	I0721 23:42:41.850035   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0721 23:42:41.850072   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:42:41.850096   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:42:41.850160   23196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0721 23:42:41.850185   23196 main.go:141] libmachine: Decoding PEM data...
	I0721 23:42:41.850200   23196 main.go:141] libmachine: Parsing certificate...
	I0721 23:42:41.850226   23196 main.go:141] libmachine: Running pre-create checks...
	I0721 23:42:41.850237   23196 main.go:141] libmachine: (ha-564251-m03) Calling .PreCreateCheck
	I0721 23:42:41.850384   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetConfigRaw
	I0721 23:42:41.850738   23196 main.go:141] libmachine: Creating machine...
	I0721 23:42:41.850753   23196 main.go:141] libmachine: (ha-564251-m03) Calling .Create
	I0721 23:42:41.850914   23196 main.go:141] libmachine: (ha-564251-m03) Creating KVM machine...
	I0721 23:42:41.852185   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found existing default KVM network
	I0721 23:42:41.852347   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found existing private KVM network mk-ha-564251
	I0721 23:42:41.852434   23196 main.go:141] libmachine: (ha-564251-m03) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03 ...
	I0721 23:42:41.852451   23196 main.go:141] libmachine: (ha-564251-m03) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:42:41.852505   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:41.852428   23971 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:42:41.852612   23196 main.go:141] libmachine: (ha-564251-m03) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:42:42.078170   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:42.078041   23971 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa...
	I0721 23:42:42.263096   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:42.262983   23971 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/ha-564251-m03.rawdisk...
	I0721 23:42:42.263131   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Writing magic tar header
	I0721 23:42:42.263145   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Writing SSH key tar header
	I0721 23:42:42.263156   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:42.263093   23971 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03 ...
	I0721 23:42:42.263235   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03
	I0721 23:42:42.263265   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0721 23:42:42.263277   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03 (perms=drwx------)
	I0721 23:42:42.263288   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0721 23:42:42.263296   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0721 23:42:42.263307   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0721 23:42:42.263319   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0721 23:42:42.263334   23196 main.go:141] libmachine: (ha-564251-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0721 23:42:42.263346   23196 main.go:141] libmachine: (ha-564251-m03) Creating domain...
	I0721 23:42:42.263363   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:42:42.263377   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0721 23:42:42.263385   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0721 23:42:42.263394   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home/jenkins
	I0721 23:42:42.263407   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Checking permissions on dir: /home
	I0721 23:42:42.263423   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Skipping /home - not owner
	I0721 23:42:42.264408   23196 main.go:141] libmachine: (ha-564251-m03) define libvirt domain using xml: 
	I0721 23:42:42.264426   23196 main.go:141] libmachine: (ha-564251-m03) <domain type='kvm'>
	I0721 23:42:42.264434   23196 main.go:141] libmachine: (ha-564251-m03)   <name>ha-564251-m03</name>
	I0721 23:42:42.264442   23196 main.go:141] libmachine: (ha-564251-m03)   <memory unit='MiB'>2200</memory>
	I0721 23:42:42.264448   23196 main.go:141] libmachine: (ha-564251-m03)   <vcpu>2</vcpu>
	I0721 23:42:42.264453   23196 main.go:141] libmachine: (ha-564251-m03)   <features>
	I0721 23:42:42.264466   23196 main.go:141] libmachine: (ha-564251-m03)     <acpi/>
	I0721 23:42:42.264477   23196 main.go:141] libmachine: (ha-564251-m03)     <apic/>
	I0721 23:42:42.264486   23196 main.go:141] libmachine: (ha-564251-m03)     <pae/>
	I0721 23:42:42.264494   23196 main.go:141] libmachine: (ha-564251-m03)     
	I0721 23:42:42.264502   23196 main.go:141] libmachine: (ha-564251-m03)   </features>
	I0721 23:42:42.264508   23196 main.go:141] libmachine: (ha-564251-m03)   <cpu mode='host-passthrough'>
	I0721 23:42:42.264530   23196 main.go:141] libmachine: (ha-564251-m03)   
	I0721 23:42:42.264550   23196 main.go:141] libmachine: (ha-564251-m03)   </cpu>
	I0721 23:42:42.264563   23196 main.go:141] libmachine: (ha-564251-m03)   <os>
	I0721 23:42:42.264574   23196 main.go:141] libmachine: (ha-564251-m03)     <type>hvm</type>
	I0721 23:42:42.264585   23196 main.go:141] libmachine: (ha-564251-m03)     <boot dev='cdrom'/>
	I0721 23:42:42.264596   23196 main.go:141] libmachine: (ha-564251-m03)     <boot dev='hd'/>
	I0721 23:42:42.264609   23196 main.go:141] libmachine: (ha-564251-m03)     <bootmenu enable='no'/>
	I0721 23:42:42.264622   23196 main.go:141] libmachine: (ha-564251-m03)   </os>
	I0721 23:42:42.264630   23196 main.go:141] libmachine: (ha-564251-m03)   <devices>
	I0721 23:42:42.264637   23196 main.go:141] libmachine: (ha-564251-m03)     <disk type='file' device='cdrom'>
	I0721 23:42:42.264669   23196 main.go:141] libmachine: (ha-564251-m03)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/boot2docker.iso'/>
	I0721 23:42:42.264693   23196 main.go:141] libmachine: (ha-564251-m03)       <target dev='hdc' bus='scsi'/>
	I0721 23:42:42.264705   23196 main.go:141] libmachine: (ha-564251-m03)       <readonly/>
	I0721 23:42:42.264716   23196 main.go:141] libmachine: (ha-564251-m03)     </disk>
	I0721 23:42:42.264729   23196 main.go:141] libmachine: (ha-564251-m03)     <disk type='file' device='disk'>
	I0721 23:42:42.264741   23196 main.go:141] libmachine: (ha-564251-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0721 23:42:42.264758   23196 main.go:141] libmachine: (ha-564251-m03)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/ha-564251-m03.rawdisk'/>
	I0721 23:42:42.264769   23196 main.go:141] libmachine: (ha-564251-m03)       <target dev='hda' bus='virtio'/>
	I0721 23:42:42.264780   23196 main.go:141] libmachine: (ha-564251-m03)     </disk>
	I0721 23:42:42.264793   23196 main.go:141] libmachine: (ha-564251-m03)     <interface type='network'>
	I0721 23:42:42.264805   23196 main.go:141] libmachine: (ha-564251-m03)       <source network='mk-ha-564251'/>
	I0721 23:42:42.264818   23196 main.go:141] libmachine: (ha-564251-m03)       <model type='virtio'/>
	I0721 23:42:42.264827   23196 main.go:141] libmachine: (ha-564251-m03)     </interface>
	I0721 23:42:42.264837   23196 main.go:141] libmachine: (ha-564251-m03)     <interface type='network'>
	I0721 23:42:42.264853   23196 main.go:141] libmachine: (ha-564251-m03)       <source network='default'/>
	I0721 23:42:42.264869   23196 main.go:141] libmachine: (ha-564251-m03)       <model type='virtio'/>
	I0721 23:42:42.264880   23196 main.go:141] libmachine: (ha-564251-m03)     </interface>
	I0721 23:42:42.264890   23196 main.go:141] libmachine: (ha-564251-m03)     <serial type='pty'>
	I0721 23:42:42.264901   23196 main.go:141] libmachine: (ha-564251-m03)       <target port='0'/>
	I0721 23:42:42.264911   23196 main.go:141] libmachine: (ha-564251-m03)     </serial>
	I0721 23:42:42.264920   23196 main.go:141] libmachine: (ha-564251-m03)     <console type='pty'>
	I0721 23:42:42.264932   23196 main.go:141] libmachine: (ha-564251-m03)       <target type='serial' port='0'/>
	I0721 23:42:42.264946   23196 main.go:141] libmachine: (ha-564251-m03)     </console>
	I0721 23:42:42.264960   23196 main.go:141] libmachine: (ha-564251-m03)     <rng model='virtio'>
	I0721 23:42:42.264977   23196 main.go:141] libmachine: (ha-564251-m03)       <backend model='random'>/dev/random</backend>
	I0721 23:42:42.264992   23196 main.go:141] libmachine: (ha-564251-m03)     </rng>
	I0721 23:42:42.265007   23196 main.go:141] libmachine: (ha-564251-m03)     
	I0721 23:42:42.265017   23196 main.go:141] libmachine: (ha-564251-m03)     
	I0721 23:42:42.265025   23196 main.go:141] libmachine: (ha-564251-m03)   </devices>
	I0721 23:42:42.265036   23196 main.go:141] libmachine: (ha-564251-m03) </domain>
	I0721 23:42:42.265045   23196 main.go:141] libmachine: (ha-564251-m03) 
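[Editor's note] The block above is the complete libvirt domain XML that libmachine generates for the worker node: ISO attached as a read-only SCSI cdrom, the raw disk on virtio, one NIC on the cluster network mk-ha-564251 and one on libvirt's default network. For orientation, here is a minimal sketch of the define-and-start sequence using the libvirt Go bindings (libvirt.org/go/libvirt); this is illustrative only, not minikube's actual kvm2 driver code, and domainXML is a placeholder for the XML printed above.

    package sketch

    import (
    	"fmt"

    	libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart persists a domain definition from raw XML and boots it,
    // mirroring the "Creating domain..." step in the log.
    func defineAndStart(domainXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return fmt.Errorf("connect to libvirt: %w", err)
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML) // registers the definition
    	if err != nil {
    		return fmt.Errorf("define domain: %w", err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // actually starts the VM
    		return fmt.Errorf("start domain: %w", err)
    	}
    	return nil
    }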
	I0721 23:42:42.271675   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:a8:f0:9d in network default
	I0721 23:42:42.272233   23196 main.go:141] libmachine: (ha-564251-m03) Ensuring networks are active...
	I0721 23:42:42.272255   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:42.272843   23196 main.go:141] libmachine: (ha-564251-m03) Ensuring network default is active
	I0721 23:42:42.273161   23196 main.go:141] libmachine: (ha-564251-m03) Ensuring network mk-ha-564251 is active
	I0721 23:42:42.273605   23196 main.go:141] libmachine: (ha-564251-m03) Getting domain xml...
	I0721 23:42:42.274281   23196 main.go:141] libmachine: (ha-564251-m03) Creating domain...
	I0721 23:42:43.487914   23196 main.go:141] libmachine: (ha-564251-m03) Waiting to get IP...
	I0721 23:42:43.488790   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:43.489358   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:43.489388   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:43.489330   23971 retry.go:31] will retry after 223.451018ms: waiting for machine to come up
	I0721 23:42:43.714689   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:43.715254   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:43.715278   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:43.715174   23971 retry.go:31] will retry after 313.245752ms: waiting for machine to come up
	I0721 23:42:44.029580   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:44.030002   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:44.030032   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:44.029965   23971 retry.go:31] will retry after 307.421104ms: waiting for machine to come up
	I0721 23:42:44.339408   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:44.339832   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:44.339858   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:44.339790   23971 retry.go:31] will retry after 576.381475ms: waiting for machine to come up
	I0721 23:42:44.917449   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:44.917865   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:44.917893   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:44.917814   23971 retry.go:31] will retry after 739.541484ms: waiting for machine to come up
	I0721 23:42:45.658321   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:45.658656   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:45.658686   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:45.658632   23971 retry.go:31] will retry after 914.474856ms: waiting for machine to come up
	I0721 23:42:46.575185   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:46.575583   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:46.575604   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:46.575528   23971 retry.go:31] will retry after 1.017323514s: waiting for machine to come up
	I0721 23:42:47.594012   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:47.594565   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:47.594597   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:47.594530   23971 retry.go:31] will retry after 1.289736101s: waiting for machine to come up
	I0721 23:42:48.885806   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:48.886172   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:48.886200   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:48.886116   23971 retry.go:31] will retry after 1.778438113s: waiting for machine to come up
	I0721 23:42:50.666535   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:50.666966   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:50.666985   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:50.666930   23971 retry.go:31] will retry after 2.194283655s: waiting for machine to come up
	I0721 23:42:52.862586   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:52.863048   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:52.863093   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:52.863023   23971 retry.go:31] will retry after 2.561837275s: waiting for machine to come up
	I0721 23:42:55.427865   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:55.428311   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:55.428337   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:55.428264   23971 retry.go:31] will retry after 3.567006608s: waiting for machine to come up
	I0721 23:42:58.997015   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:42:58.997369   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find current IP address of domain ha-564251-m03 in network mk-ha-564251
	I0721 23:42:58.997390   23196 main.go:141] libmachine: (ha-564251-m03) DBG | I0721 23:42:58.997349   23971 retry.go:31] will retry after 2.970832116s: waiting for machine to come up
	I0721 23:43:01.970081   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:01.970646   23196 main.go:141] libmachine: (ha-564251-m03) Found IP for machine: 192.168.39.89
	I0721 23:43:01.970673   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has current primary IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
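[Editor's note] The retry.go lines above show the "Waiting to get IP" loop: each failed DHCP-lease lookup is followed by a sleep that grows (with jitter) from ~220ms up to a few seconds, until the guest's MAC appears with an address. A stdlib-only sketch of that polling pattern; the lookup callback is a hypothetical stand-in for the lease query, not minikube's retry package.

    package sketch

    import (
    	"errors"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it yields an address, sleeping a little
    // longer (with jitter) after each miss, like the retry.go lines in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		if delay < 4*time.Second {
    			delay *= 2 // back off, capped at a few seconds
    		}
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }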
	I0721 23:43:01.970682   23196 main.go:141] libmachine: (ha-564251-m03) Reserving static IP address...
	I0721 23:43:01.971028   23196 main.go:141] libmachine: (ha-564251-m03) DBG | unable to find host DHCP lease matching {name: "ha-564251-m03", mac: "52:54:00:9c:e6:b3", ip: "192.168.39.89"} in network mk-ha-564251
	I0721 23:43:02.042727   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Getting to WaitForSSH function...
	I0721 23:43:02.042759   23196 main.go:141] libmachine: (ha-564251-m03) Reserved static IP address: 192.168.39.89
	I0721 23:43:02.042772   23196 main.go:141] libmachine: (ha-564251-m03) Waiting for SSH to be available...
	I0721 23:43:02.045758   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.046196   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.046225   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.046410   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Using SSH client type: external
	I0721 23:43:02.046431   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa (-rw-------)
	I0721 23:43:02.046465   23196 main.go:141] libmachine: (ha-564251-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0721 23:43:02.046484   23196 main.go:141] libmachine: (ha-564251-m03) DBG | About to run SSH command:
	I0721 23:43:02.046498   23196 main.go:141] libmachine: (ha-564251-m03) DBG | exit 0
	I0721 23:43:02.170333   23196 main.go:141] libmachine: (ha-564251-m03) DBG | SSH cmd err, output: <nil>: 
	I0721 23:43:02.170581   23196 main.go:141] libmachine: (ha-564251-m03) KVM machine creation complete!
	I0721 23:43:02.170916   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetConfigRaw
	I0721 23:43:02.171391   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:02.171562   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:02.171782   23196 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0721 23:43:02.171799   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:43:02.173068   23196 main.go:141] libmachine: Detecting operating system of created instance...
	I0721 23:43:02.173085   23196 main.go:141] libmachine: Waiting for SSH to be available...
	I0721 23:43:02.173090   23196 main.go:141] libmachine: Getting to WaitForSSH function...
	I0721 23:43:02.173096   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.175538   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.175906   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.175939   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.176080   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.176251   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.176421   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.176546   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.176721   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.176899   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.176910   23196 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0721 23:43:02.281807   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
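[Editor's note] Both SSH probes above (the external /usr/bin/ssh invocation and the native Go client) simply run "exit 0" and treat a nil error as "sshd is up and the key is accepted". A sketch of the native-client variant with golang.org/x/crypto/ssh; addr, user, and keyPath are placeholders for the values shown in the log.

    package sketch

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // sshReady dials the guest and runs "exit 0"; a nil error is all the
    // provisioner needs to know that SSH is available.
    func sshReady(addr, user, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return fmt.Errorf("dial %s: %w", addr, err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	return session.Run("exit 0")
    }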
	I0721 23:43:02.281831   23196 main.go:141] libmachine: Detecting the provisioner...
	I0721 23:43:02.281842   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.284709   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.285089   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.285112   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.285352   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.285540   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.285676   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.285794   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.285952   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.286121   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.286135   23196 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0721 23:43:02.390968   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0721 23:43:02.391036   23196 main.go:141] libmachine: found compatible host: buildroot
	I0721 23:43:02.391045   23196 main.go:141] libmachine: Provisioning with buildroot...
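[Editor's note] "Detecting the provisioner" above is just `cat /etc/os-release` plus a match on the ID field, which here yields "buildroot" and selects the buildroot provisioner. A sketch of that parse; detectProvisioner is a hypothetical name, not minikube's.

    package sketch

    import (
    	"bufio"
    	"strings"
    )

    // detectProvisioner pulls the ID= field out of /etc/os-release content,
    // e.g. returning "buildroot" for the output shown in the log.
    func detectProvisioner(osRelease string) string {
    	sc := bufio.NewScanner(strings.NewReader(osRelease))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if v, ok := strings.CutPrefix(line, "ID="); ok {
    			return strings.Trim(v, `"`)
    		}
    	}
    	return ""
    }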
	I0721 23:43:02.391052   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetMachineName
	I0721 23:43:02.391296   23196 buildroot.go:166] provisioning hostname "ha-564251-m03"
	I0721 23:43:02.391322   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetMachineName
	I0721 23:43:02.391526   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.394031   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.394382   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.394408   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.394499   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.394691   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.394842   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.394977   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.395125   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.395334   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.395352   23196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-564251-m03 && echo "ha-564251-m03" | sudo tee /etc/hostname
	I0721 23:43:02.513525   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251-m03
	
	I0721 23:43:02.513588   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.516196   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.516566   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.516590   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.516722   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.516910   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.517089   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.517216   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.517357   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.517582   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.517602   23196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-564251-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-564251-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-564251-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:43:02.631105   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
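[Editor's note] The shell snippet above is idempotent: if any /etc/hosts line already maps the hostname it does nothing, otherwise it rewrites an existing 127.0.1.1 line or appends a new one. The same guard expressed as a pure function over the file content, for illustration only:

    package sketch

    import "strings"

    // ensureHostname mirrors the shell snippet in the log: leave the file
    // alone if the name is already mapped, otherwise replace an existing
    // 127.0.1.1 line or append a new one.
    func ensureHostname(hosts, name string) string {
    	lines := strings.Split(hosts, "\n")
    	for _, l := range lines {
    		trimmed := strings.TrimSpace(l)
    		if strings.HasSuffix(trimmed, " "+name) || strings.HasSuffix(trimmed, "\t"+name) {
    			return hosts // already present
    		}
    	}
    	for i, l := range lines {
    		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name
    			return strings.Join(lines, "\n")
    		}
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }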
	I0721 23:43:02.631138   23196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:43:02.631165   23196 buildroot.go:174] setting up certificates
	I0721 23:43:02.631179   23196 provision.go:84] configureAuth start
	I0721 23:43:02.631188   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetMachineName
	I0721 23:43:02.631446   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:43:02.634128   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.634576   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.634624   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.634793   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.637233   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.637593   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.637619   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.637751   23196 provision.go:143] copyHostCerts
	I0721 23:43:02.637781   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:43:02.637810   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0721 23:43:02.637822   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:43:02.637892   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:43:02.637978   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:43:02.638014   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0721 23:43:02.638030   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:43:02.638069   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:43:02.638130   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:43:02.638150   23196 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0721 23:43:02.638157   23196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:43:02.638195   23196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:43:02.638258   23196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.ha-564251-m03 san=[127.0.0.1 192.168.39.89 ha-564251-m03 localhost minikube]
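[Editor's note] The SAN list above (127.0.0.1, the node IP, the hostname, localhost, minikube) is what makes this server certificate valid however the machine is later addressed. A condensed sketch of issuing such a cert with crypto/x509, assuming an already-loaded CA cert and key as inputs; key size, validity, and helper name are illustrative choices, not minikube's exact parameters.

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert signs a server certificate whose SANs cover every name
    // and IP the machine may be reached by, as in the provision.go line above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
    	org string, sans []string) ([]byte, *rsa.PrivateKey, error) {

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, s := range sans { // split SANs into IPs vs DNS names
    		if ip := net.ParseIP(s); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, s)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }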
	I0721 23:43:02.735309   23196 provision.go:177] copyRemoteCerts
	I0721 23:43:02.735359   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:43:02.735384   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.737765   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.738103   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.738134   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.738285   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.738451   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.738633   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.738767   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:43:02.821678   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0721 23:43:02.821745   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:43:02.843500   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0721 23:43:02.843563   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0721 23:43:02.864390   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0721 23:43:02.864455   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 23:43:02.886139   23196 provision.go:87] duration metric: took 254.946457ms to configureAuth
	I0721 23:43:02.886166   23196 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:43:02.886396   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:43:02.886460   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:02.889045   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.889432   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:02.889463   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:02.889618   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:02.889796   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.889949   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:02.890109   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:02.890242   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:02.890410   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:02.890425   23196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:43:03.138130   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:43:03.138156   23196 main.go:141] libmachine: Checking connection to Docker...
	I0721 23:43:03.138164   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetURL
	I0721 23:43:03.139494   23196 main.go:141] libmachine: (ha-564251-m03) DBG | Using libvirt version 6000000
	I0721 23:43:03.141768   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.142131   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.142157   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.142303   23196 main.go:141] libmachine: Docker is up and running!
	I0721 23:43:03.142319   23196 main.go:141] libmachine: Reticulating splines...
	I0721 23:43:03.142326   23196 client.go:171] duration metric: took 21.292314837s to LocalClient.Create
	I0721 23:43:03.142348   23196 start.go:167] duration metric: took 21.292379398s to libmachine.API.Create "ha-564251"
	I0721 23:43:03.142357   23196 start.go:293] postStartSetup for "ha-564251-m03" (driver="kvm2")
	I0721 23:43:03.142366   23196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:43:03.142387   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.142644   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:43:03.142673   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:03.144607   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.144929   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.144958   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.145078   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:03.145218   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.145369   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:03.145480   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:43:03.228172   23196 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:43:03.231951   23196 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:43:03.231987   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:43:03.232040   23196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:43:03.232104   23196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0721 23:43:03.232112   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0721 23:43:03.232188   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 23:43:03.241309   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:43:03.263190   23196 start.go:296] duration metric: took 120.821526ms for postStartSetup
	I0721 23:43:03.263233   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetConfigRaw
	I0721 23:43:03.263827   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:43:03.266290   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.266781   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.266811   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.267040   23196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:43:03.267243   23196 start.go:128] duration metric: took 21.438859784s to createHost
	I0721 23:43:03.267270   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:03.269462   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.269819   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.269834   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.270019   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:03.270207   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.270363   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.270525   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:03.270722   23196 main.go:141] libmachine: Using SSH client type: native
	I0721 23:43:03.270917   23196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0721 23:43:03.270931   23196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 23:43:03.375117   23196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605383.350180133
	
	I0721 23:43:03.375162   23196 fix.go:216] guest clock: 1721605383.350180133
	I0721 23:43:03.375172   23196 fix.go:229] Guest: 2024-07-21 23:43:03.350180133 +0000 UTC Remote: 2024-07-21 23:43:03.267255284 +0000 UTC m=+142.753883431 (delta=82.924849ms)
	I0721 23:43:03.375192   23196 fix.go:200] guest clock delta is within tolerance: 82.924849ms
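[Editor's note] The guest-clock check above runs `date +%s.%N` inside the VM and compares the result against the host's clock, accepting small skew (here ~83ms). A stdlib sketch of that delta computation; the tolerance comparison is left to the caller, as in the log.

    package sketch

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses a guest "date +%s.%N" reading and returns its absolute
    // distance from the local clock; callers compare it against a tolerance.
    func clockDelta(guest string) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, fmt.Errorf("parse seconds: %w", err)
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, fmt.Errorf("parse nanoseconds: %w", err)
    		}
    	}
    	d := time.Since(time.Unix(sec, nsec))
    	if d < 0 {
    		d = -d
    	}
    	return d, nil
    }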
	I0721 23:43:03.375200   23196 start.go:83] releasing machines lock for "ha-564251-m03", held for 21.546916603s
	I0721 23:43:03.375231   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.375490   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:43:03.377846   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.378222   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.378250   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.380443   23196 out.go:177] * Found network options:
	I0721 23:43:03.381872   23196 out.go:177]   - NO_PROXY=192.168.39.91,192.168.39.202
	W0721 23:43:03.383034   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	W0721 23:43:03.383054   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0721 23:43:03.383066   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.383661   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.383857   23196 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:43:03.383949   23196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:43:03.383985   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	W0721 23:43:03.384056   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	W0721 23:43:03.384080   23196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0721 23:43:03.384143   23196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:43:03.384165   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:43:03.386580   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.386810   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.386982   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.387005   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.387216   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:03.387400   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:03.387432   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:03.387479   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.387602   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:43:03.387744   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:43:03.387754   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:03.387885   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:43:03.387917   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:43:03.388032   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:43:03.617764   23196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:43:03.623563   23196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:43:03.623630   23196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:43:03.637910   23196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
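[Editor's note] Disabling the conflicting bridge/podman CNI configs above is just a rename to a .mk_disabled suffix, so the default CNI does not fight with the cluster's. A sketch of that sweep with the standard library; directory path and function name are illustrative.

    package sketch

    import (
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs out of the way, as
    // the "find ... -exec mv {} {}.mk_disabled" command in the log does.
    func disableBridgeCNIs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }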
	I0721 23:43:03.637932   23196 start.go:495] detecting cgroup driver to use...
	I0721 23:43:03.637984   23196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:43:03.653039   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:43:03.664909   23196 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:43:03.664961   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:43:03.677456   23196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:43:03.689956   23196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:43:03.803962   23196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:43:03.936639   23196 docker.go:233] disabling docker service ...
	I0721 23:43:03.936714   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:43:03.951884   23196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:43:03.963888   23196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:43:04.094568   23196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:43:04.215209   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0721 23:43:04.229166   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:43:04.246213   23196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:43:04.246280   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.256127   23196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:43:04.256189   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.265950   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.276981   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.288430   23196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:43:04.299786   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.309646   23196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:43:04.325631   23196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
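[Editor's note] The string of `sed -i` edits above pins the pause image, switches CRI-O to the cgroupfs manager, and injects a default_sysctls entry that opens unprivileged ports. The two headline substitutions as a regexp pass over the config text; a sketch only, not minikube's code.

    package sketch

    import "regexp"

    var (
    	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    // rewriteCrioConf applies the same substitutions as the sed commands in
    // the log to the contents of /etc/crio/crio.conf.d/02-crio.conf.
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
    	conf = pauseRe.ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
    	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "`+cgroupManager+`"`)
    	return conf
    }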
	I0721 23:43:04.335342   23196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:43:04.343950   23196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0721 23:43:04.344002   23196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0721 23:43:04.355378   23196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
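[Editor's note] The netfilter step above deliberately tolerates a failing `sysctl net.bridge.bridge-nf-call-iptables` (status 255 when br_netfilter is not yet loaded, as the log notes is "might be okay") and falls back to modprobe, then enables IPv4 forwarding. A sketch of the check-then-load half with os/exec:

    package sketch

    import "os/exec"

    // ensureBrNetfilter mirrors the log: if the sysctl key is missing, load
    // the br_netfilter module so bridged traffic traverses iptables.
    func ensureBrNetfilter() error {
    	if err := exec.Command("sudo", "sysctl",
    		"net.bridge.bridge-nf-call-iptables").Run(); err == nil {
    		return nil // key exists, module already loaded
    	}
    	return exec.Command("sudo", "modprobe", "br_netfilter").Run()
    }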
	I0721 23:43:04.364357   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:43:04.491098   23196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0721 23:43:04.619871   23196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:43:04.619952   23196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:43:04.624297   23196 start.go:563] Will wait 60s for crictl version
	I0721 23:43:04.624357   23196 ssh_runner.go:195] Run: which crictl
	I0721 23:43:04.627832   23196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:43:04.665590   23196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0721 23:43:04.665664   23196 ssh_runner.go:195] Run: crio --version
	I0721 23:43:04.692460   23196 ssh_runner.go:195] Run: crio --version
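[Editor's note] Before declaring the runtime ready, the log waits up to 60s for the CRI socket, then shells out to `crictl version` and `crio --version`. A sketch of that socket-then-version gate; the socket path matches the log, the rest is illustrative.

    package sketch

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForCrio polls for the CRI-O socket and then asks crictl for its
    // version, as the "Will wait 60s for ..." lines above do.
    func waitForCrio(sock string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(sock); err == nil {
    			break
    		}
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("timed out waiting for %s", sock)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
    	return string(out), err
    }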
	I0721 23:43:04.720162   23196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:43:04.721498   23196 out.go:177]   - env NO_PROXY=192.168.39.91
	I0721 23:43:04.722768   23196 out.go:177]   - env NO_PROXY=192.168.39.91,192.168.39.202
	I0721 23:43:04.723848   23196 main.go:141] libmachine: (ha-564251-m03) Calling .GetIP
	I0721 23:43:04.726673   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:04.727088   23196 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:43:04.727118   23196 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:43:04.727384   23196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:43:04.731216   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:43:04.742584   23196 mustload.go:65] Loading cluster: ha-564251
	I0721 23:43:04.742825   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:43:04.743220   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:43:04.743284   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:43:04.758771   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0721 23:43:04.759271   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:43:04.759737   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:43:04.759762   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:43:04.760048   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:43:04.760267   23196 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:43:04.762317   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:43:04.762687   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:43:04.762729   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:43:04.778848   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I0721 23:43:04.779235   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:43:04.779685   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:43:04.779707   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:43:04.779993   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:43:04.780189   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:43:04.780318   23196 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251 for IP: 192.168.39.89
	I0721 23:43:04.780329   23196 certs.go:194] generating shared ca certs ...
	I0721 23:43:04.780347   23196 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:43:04.780458   23196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:43:04.780494   23196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:43:04.780503   23196 certs.go:256] generating profile certs ...
	I0721 23:43:04.780566   23196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key
	I0721 23:43:04.780588   23196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.a4a5f4d0
	I0721 23:43:04.780604   23196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.a4a5f4d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91 192.168.39.202 192.168.39.89 192.168.39.254]
	I0721 23:43:05.011110   23196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.a4a5f4d0 ...
	I0721 23:43:05.011146   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.a4a5f4d0: {Name:mk0d14ced944e14d8abaa56474e12ed7f0f73217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:43:05.011332   23196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.a4a5f4d0 ...
	I0721 23:43:05.011347   23196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.a4a5f4d0: {Name:mk7d7654d81c42e493ce8909de430daf29543ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:43:05.011440   23196 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.a4a5f4d0 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt
	I0721 23:43:05.011607   23196 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.a4a5f4d0 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key
	I0721 23:43:05.011791   23196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key
	I0721 23:43:05.011810   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0721 23:43:05.011822   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0721 23:43:05.011832   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0721 23:43:05.011842   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0721 23:43:05.011852   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0721 23:43:05.011864   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0721 23:43:05.011874   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0721 23:43:05.011885   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0721 23:43:05.011927   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0721 23:43:05.011955   23196 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0721 23:43:05.011964   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:43:05.011985   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:43:05.012005   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:43:05.012025   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:43:05.012058   23196 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:43:05.012085   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0721 23:43:05.012099   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:43:05.012112   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0721 23:43:05.012143   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:43:05.014986   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:43:05.015468   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:43:05.015494   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:43:05.015661   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:43:05.015837   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:43:05.016017   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:43:05.016152   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:43:05.090966   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0721 23:43:05.095673   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0721 23:43:05.107966   23196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0721 23:43:05.111908   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0721 23:43:05.122311   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0721 23:43:05.125941   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0721 23:43:05.135113   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0721 23:43:05.138926   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0721 23:43:05.148119   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0721 23:43:05.151668   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0721 23:43:05.160580   23196 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0721 23:43:05.163941   23196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
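
The stat/scp-to-memory pairs above read the cluster-wide key material (sa.key, front-proxy CA, etcd CA) off the primary so the joining control plane receives byte-identical copies; every HA member must share this CA material or the join fails TLS bootstrap. For sanity-checking what is being shipped, a small hedged sketch (standard library only, path from the log) that decodes one of these PEMs and prints its subject and expiry:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"
    )

    func main() {
        // Inspect one of the CA files being synced above.
        data, err := os.ReadFile("/home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("not PEM data")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("subject=%s notAfter=%s", cert.Subject, cert.NotAfter)
    }
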
	I0721 23:43:05.172840   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:43:05.198333   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:43:05.222050   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:43:05.243975   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:43:05.268300   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0721 23:43:05.290810   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:43:05.312478   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:43:05.334568   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:43:05.356078   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0721 23:43:05.377090   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:43:05.398048   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0721 23:43:05.418825   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0721 23:43:05.433738   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0721 23:43:05.450154   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0721 23:43:05.466683   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0721 23:43:05.481508   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0721 23:43:05.498633   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0721 23:43:05.513810   23196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0721 23:43:05.528967   23196 ssh_runner.go:195] Run: openssl version
	I0721 23:43:05.534351   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0721 23:43:05.544208   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0721 23:43:05.548119   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0721 23:43:05.548161   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0721 23:43:05.553477   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 23:43:05.564641   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:43:05.575638   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:43:05.579720   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:43:05.579770   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:43:05.584920   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0721 23:43:05.594278   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0721 23:43:05.603788   23196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0721 23:43:05.607648   23196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0721 23:43:05.607686   23196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0721 23:43:05.613043   23196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
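
Each "ln -fs" above installs a CA under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so that TLS stacks doing hashed-directory lookups in /etc/ssl/certs can find it. A minimal sketch of computing the hash and creating the link, shelling out to openssl exactly as the log does (paths from the lines above; needs root to write under /etc/ssl/certs):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mimic ln -fs: replace any existing link
        if err := os.Symlink(pem, link); err != nil {
            log.Fatal(err)
        }
        log.Printf("%s -> %s", link, pem)
    }
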
	I0721 23:43:05.624031   23196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:43:05.627604   23196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:43:05.627656   23196 kubeadm.go:934] updating node {m03 192.168.39.89 8443 v1.30.3 crio true true} ...
	I0721 23:43:05.627739   23196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-564251-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
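
The unit fragment above clears ExecStart and redefines it so this kubelet registers as ha-564251-m03 with --node-ip=192.168.39.89. A hedged sketch of rendering such a drop-in with text/template (the template and field names here are illustrative, not minikube's actual kubeadm.go template):

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative stand-in for minikube's kubelet unit template.
    const unit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "Runtime": "crio",
            "Version": "v1.30.3",
            "Node":    "ha-564251-m03",
            "IP":      "192.168.39.89",
        })
    }
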
	I0721 23:43:05.627766   23196 kube-vip.go:115] generating kube-vip config ...
	I0721 23:43:05.627802   23196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0721 23:43:05.643803   23196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0721 23:43:05.643866   23196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
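
The static pod above makes every control-plane node a candidate for the VIP 192.168.39.254: vip_leaderelection elects one holder via the plndr-cp-lock lease, vip_arp announces the address, and lb_enable spreads API traffic across the members on port 8443. A quick hedged probe of that endpoint (address and port taken from the config above):

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        // The current kube-vip leaseholder should answer on the control-plane VIP.
        conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
        if err != nil {
            log.Fatalf("VIP not reachable: %v", err)
        }
        defer conn.Close()
        log.Println("control-plane VIP is answering on 8443")
    }
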
	I0721 23:43:05.643927   23196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:43:05.652073   23196 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0721 23:43:05.652127   23196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0721 23:43:05.660945   23196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0721 23:43:05.660953   23196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0721 23:43:05.660964   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0721 23:43:05.660963   23196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0721 23:43:05.660978   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0721 23:43:05.660989   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:43:05.661011   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0721 23:43:05.661038   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0721 23:43:05.665118   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0721 23:43:05.665142   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0721 23:43:05.700161   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0721 23:43:05.700165   23196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0721 23:43:05.700209   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0721 23:43:05.700311   23196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0721 23:43:05.756662   23196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0721 23:43:05.756712   23196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
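
"Not caching binary" above means each binary is fetched straight from dl.k8s.io with checksum=file:<url>.sha256, i.e. the published SHA-256 is downloaded alongside and compared before the binary is trusted. A hedged sketch of that verification step (URL from the log; error handling trimmed to the essentials):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "io"
        "log"
        "net/http"
        "strings"
    )

    func fetch(url string) []byte {
        resp, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        data, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        return data
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
        bin := fetch(url)
        want := strings.Fields(string(fetch(url + ".sha256")))[0]
        sum := sha256.Sum256(bin)
        if got := hex.EncodeToString(sum[:]); got != want {
            log.Fatalf("checksum mismatch: got %s want %s", got, want)
        }
        log.Printf("kubectl verified: %d bytes", len(bin))
    }
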
	I0721 23:43:06.530873   23196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0721 23:43:06.539897   23196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0721 23:43:06.556272   23196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:43:06.572072   23196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0721 23:43:06.587303   23196 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0721 23:43:06.590895   23196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:43:06.602268   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:43:06.711722   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
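
The grep/rewrite one-liner above pins control-plane.minikube.internal to the VIP in /etc/hosts so the kubeadm join below can resolve the endpoint before cluster DNS exists. The same edit expressed in Go (a sketch only; the bash version in the log is what actually runs, and writing /etc/hosts needs root):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale entry for the name before re-adding it.
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }
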
	I0721 23:43:06.727567   23196 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:43:06.728052   23196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:43:06.728104   23196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:43:06.744693   23196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I0721 23:43:06.745092   23196 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:43:06.746102   23196 main.go:141] libmachine: Using API Version  1
	I0721 23:43:06.746131   23196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:43:06.746487   23196 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:43:06.746748   23196 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:43:06.746904   23196 start.go:317] joinCluster: &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:43:06.747060   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0721 23:43:06.747082   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:43:06.750062   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:43:06.750557   23196 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:43:06.750584   23196 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:43:06.750734   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:43:06.750902   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:43:06.751027   23196 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:43:06.751130   23196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:43:06.904912   23196 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:43:06.904964   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 135g62.u4ctzsuofj006i1y --discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-564251-m03 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I0721 23:43:30.357894   23196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 135g62.u4ctzsuofj006i1y --discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-564251-m03 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (23.45289729s)
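
The token is minted on the existing primary ("kubeadm token create --print-join-command --ttl=0") and the resulting join line is replayed on m03 with --control-plane, so the node gets its own apiserver and etcd member; the ~23s runtime above is mostly the new etcd member syncing and being promoted. A hedged sketch of driving the two commands with os/exec (each command runs on its respective node as root; minikube actually runs them over SSH with sudo and the extra flags shown above):

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // On the primary: mint a non-expiring token and print the join command.
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            log.Fatal(err)
        }
        join := strings.TrimSpace(string(out))
        // On the joining node: replay it as a control-plane join.
        args := append(strings.Fields(join)[1:], "--control-plane",
            "--apiserver-advertise-address=192.168.39.89")
        if b, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
            log.Fatalf("join failed: %v\n%s", err, b)
        }
    }
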
	I0721 23:43:30.357936   23196 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0721 23:43:30.872196   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-564251-m03 minikube.k8s.io/updated_at=2024_07_21T23_43_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-564251 minikube.k8s.io/primary=false
	I0721 23:43:31.000136   23196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-564251-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0721 23:43:31.111420   23196 start.go:319] duration metric: took 24.364514251s to joinCluster
	I0721 23:43:31.111497   23196 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0721 23:43:31.111817   23196 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:43:31.112658   23196 out.go:177] * Verifying Kubernetes components...
	I0721 23:43:31.114080   23196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:43:31.402850   23196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:43:31.424762   23196 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:43:31.424966   23196 kapi.go:59] client config for ha-564251: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key", CAFile:"/home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0721 23:43:31.425020   23196 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.91:8443
	I0721 23:43:31.425255   23196 node_ready.go:35] waiting up to 6m0s for node "ha-564251-m03" to be "Ready" ...
	I0721 23:43:31.425352   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:31.425362   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:31.425369   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:31.425374   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:31.429044   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:31.925877   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:31.925907   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:31.925920   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:31.925926   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:31.929258   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:32.426207   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:32.426226   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:32.426235   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:32.426239   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:32.429776   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:32.925833   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:32.925855   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:32.925866   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:32.925875   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:32.928831   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:33.426060   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:33.426078   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:33.426085   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:33.426092   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:33.429268   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:33.429808   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:33.926012   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:33.926032   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:33.926041   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:33.926046   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:33.929721   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:34.425828   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:34.425847   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:34.425854   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:34.425860   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:34.429237   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:34.926189   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:34.926209   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:34.926217   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:34.926223   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:34.929615   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:35.425475   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:35.425494   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:35.425502   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:35.425507   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:35.428744   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:35.925770   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:35.925791   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:35.925799   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:35.925803   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:35.929136   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:35.929975   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:36.425884   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:36.425902   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:36.425910   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:36.425915   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:36.429398   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:36.926319   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:36.926341   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:36.926351   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:36.926356   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:36.930368   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:37.426492   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:37.426513   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:37.426525   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:37.426529   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:37.430591   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:37.925539   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:37.925560   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:37.925568   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:37.925572   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:37.928720   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:38.425635   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:38.425658   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:38.425666   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:38.425671   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:38.428524   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:38.429051   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:38.926217   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:38.926239   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:38.926247   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:38.926252   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:38.929862   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:39.425450   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:39.425474   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:39.425486   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:39.425492   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:39.428216   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:39.926482   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:39.926508   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:39.926519   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:39.926526   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:39.930056   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:40.425695   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:40.425713   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:40.425725   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:40.425729   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:40.431532   23196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0721 23:43:40.432222   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:40.925702   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:40.925721   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:40.925729   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:40.925732   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:40.928883   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:41.425892   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:41.425913   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:41.425921   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:41.425927   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:41.428966   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:41.925793   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:41.925815   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:41.925822   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:41.925825   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:41.928750   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:42.425643   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:42.425663   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:42.425670   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:42.425674   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:42.429127   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:42.926187   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:42.926210   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:42.926218   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:42.926222   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:42.929588   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:42.930141   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:43.426291   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:43.426312   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:43.426318   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:43.426324   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:43.429259   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:43.926114   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:43.926138   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:43.926146   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:43.926149   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:43.929325   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:44.425428   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:44.425447   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:44.425456   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:44.425460   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:44.428770   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:44.925544   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:44.925563   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:44.925568   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:44.925571   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:44.929039   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:45.425918   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:45.425936   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:45.425944   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:45.425948   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:45.428920   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:45.429608   23196 node_ready.go:53] node "ha-564251-m03" has status "Ready":"False"
	I0721 23:43:45.925972   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:45.925997   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:45.926006   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:45.926009   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:45.929425   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:46.425884   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:46.425903   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:46.425911   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:46.425931   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:46.429760   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:46.925827   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:46.925847   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:46.925854   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:46.925859   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:46.929370   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:47.425444   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:47.425462   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.425470   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.425474   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.428676   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:47.926474   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:47.926498   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.926508   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.926514   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.930150   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:47.930828   23196 node_ready.go:49] node "ha-564251-m03" has status "Ready":"True"
	I0721 23:43:47.930847   23196 node_ready.go:38] duration metric: took 16.50556977s for node "ha-564251-m03" to be "Ready" ...
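
The ~16.5s of repeated GET /api/v1/nodes/ha-564251-m03 above is minikube polling the node's Ready condition roughly every 500ms. The same check with client-go looks approximately like this (a sketch, not minikube's node_ready.go; kubeconfig path from the log):

    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19312-5094/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same budget as the log's wait
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-564251-m03", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        log.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for node Ready")
    }
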
	I0721 23:43:47.930855   23196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:43:47.930908   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:47.930916   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.930923   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.930926   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.939306   23196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0721 23:43:47.946025   23196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.946096   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bsbzk
	I0721 23:43:47.946105   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.946111   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.946116   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.949284   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:47.949886   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:47.949901   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.949908   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.949913   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.952737   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.953346   23196 pod_ready.go:92] pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:47.953362   23196 pod_ready.go:81] duration metric: took 7.314216ms for pod "coredns-7db6d8ff4d-bsbzk" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.953370   23196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.953414   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f4lqn
	I0721 23:43:47.953421   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.953429   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.953433   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.956032   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.956555   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:47.956574   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.956581   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.956587   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.959261   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.959848   23196 pod_ready.go:92] pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:47.959861   23196 pod_ready.go:81] duration metric: took 6.485232ms for pod "coredns-7db6d8ff4d-f4lqn" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.959868   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.959920   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251
	I0721 23:43:47.959929   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.959935   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.959938   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.962303   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.963048   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:47.963065   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.963074   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.963077   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.965396   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.965885   23196 pod_ready.go:92] pod "etcd-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:47.965898   23196 pod_ready.go:81] duration metric: took 6.02401ms for pod "etcd-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.965904   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.965943   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251-m02
	I0721 23:43:47.965952   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.965958   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.965963   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.968325   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.968854   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:47.968867   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:47.968873   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:47.968878   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:47.971089   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:47.971519   23196 pod_ready.go:92] pod "etcd-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:47.971535   23196 pod_ready.go:81] duration metric: took 5.625442ms for pod "etcd-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:47.971543   23196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:48.126929   23196 request.go:629] Waited for 155.327284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251-m03
	I0721 23:43:48.127015   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/etcd-ha-564251-m03
	I0721 23:43:48.127025   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.127036   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.127047   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.131167   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:48.327206   23196 request.go:629] Waited for 195.358079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:48.327265   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:48.327273   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.327286   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.327295   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.331699   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:48.332308   23196 pod_ready.go:92] pod "etcd-ha-564251-m03" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:48.332333   23196 pod_ready.go:81] duration metric: took 360.782776ms for pod "etcd-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
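
The "Waited for ...ms due to client-side throttling" lines come from client-go's own rate limiter: the rest.Config dumped earlier shows QPS:0, Burst:0, which client-go treats as its defaults (about 5 requests/s with a burst of 10), so the back-to-back pod and node GETs get spaced out. Raising the limits on the config avoids this; a sketch with arbitrary values:

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19312-5094/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        // Left at zero (as in the dump above), client-go falls back to
        // roughly 5 req/s with burst 10, which produces the throttling waits.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
        log.Println("clientset built with a relaxed client-side rate limit")
    }
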
	I0721 23:43:48.332358   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:48.526855   23196 request.go:629] Waited for 194.432062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251
	I0721 23:43:48.526936   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251
	I0721 23:43:48.526945   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.526955   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.526964   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.530671   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:48.726624   23196 request.go:629] Waited for 195.327692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:48.726683   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:48.726690   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.726700   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.726705   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.730171   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:48.730798   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:48.730817   23196 pod_ready.go:81] duration metric: took 398.451431ms for pod "kube-apiserver-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:48.730825   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:48.926691   23196 request.go:629] Waited for 195.796759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m02
	I0721 23:43:48.926769   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m02
	I0721 23:43:48.926774   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:48.926787   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:48.926795   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:48.930198   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:49.127318   23196 request.go:629] Waited for 196.366628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:49.127379   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:49.127384   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.127391   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.127394   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.130655   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:49.131201   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:49.131219   23196 pod_ready.go:81] duration metric: took 400.386742ms for pod "kube-apiserver-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.131228   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.327346   23196 request.go:629] Waited for 196.060541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m03
	I0721 23:43:49.327415   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-564251-m03
	I0721 23:43:49.327421   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.327428   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.327433   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.330426   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:49.526550   23196 request.go:629] Waited for 195.274214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:49.526614   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:49.526621   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.526632   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.526637   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.529309   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:49.529956   23196 pod_ready.go:92] pod "kube-apiserver-ha-564251-m03" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:49.529973   23196 pod_ready.go:81] duration metric: took 398.73979ms for pod "kube-apiserver-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.529983   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.727067   23196 request.go:629] Waited for 197.025666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251
	I0721 23:43:49.727144   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251
	I0721 23:43:49.727151   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.727161   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.727170   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.731068   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:49.926842   23196 request.go:629] Waited for 194.942395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:49.926894   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:49.926905   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:49.926914   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:49.926921   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:49.930093   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:49.930707   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:49.930727   23196 pod_ready.go:81] duration metric: took 400.737593ms for pod "kube-controller-manager-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:49.930736   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.126755   23196 request.go:629] Waited for 195.962238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m02
	I0721 23:43:50.126820   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m02
	I0721 23:43:50.126826   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.126846   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.126851   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.130343   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:50.327463   23196 request.go:629] Waited for 196.372309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:50.327509   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:50.327514   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.327521   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.327532   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.330198   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:50.330812   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:50.330833   23196 pod_ready.go:81] duration metric: took 400.088718ms for pod "kube-controller-manager-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.330845   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.526917   23196 request.go:629] Waited for 196.002846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m03
	I0721 23:43:50.526983   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251-m03
	I0721 23:43:50.526991   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.527004   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.527009   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.535161   23196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0721 23:43:50.727367   23196 request.go:629] Waited for 191.442236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:50.727434   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:50.727441   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.727450   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.727455   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.731714   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:50.732519   23196 pod_ready.go:92] pod "kube-controller-manager-ha-564251-m03" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:50.732536   23196 pod_ready.go:81] duration metric: took 401.68329ms for pod "kube-controller-manager-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.732546   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2xlks" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:50.927160   23196 request.go:629] Waited for 194.546992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xlks
	I0721 23:43:50.927253   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xlks
	I0721 23:43:50.927265   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:50.927275   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:50.927280   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:50.931547   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:51.126846   23196 request.go:629] Waited for 194.351495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:51.126923   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:51.126930   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.126940   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.126951   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.131236   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:51.132019   23196 pod_ready.go:92] pod "kube-proxy-2xlks" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:51.132043   23196 pod_ready.go:81] duration metric: took 399.49068ms for pod "kube-proxy-2xlks" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.132053   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8c6vn" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.326504   23196 request.go:629] Waited for 194.390902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8c6vn
	I0721 23:43:51.326554   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8c6vn
	I0721 23:43:51.326559   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.326566   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.326569   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.330347   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:51.526893   23196 request.go:629] Waited for 195.395181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:51.526957   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:51.526964   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.526975   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.526980   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.530104   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:51.530666   23196 pod_ready.go:92] pod "kube-proxy-8c6vn" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:51.530690   23196 pod_ready.go:81] duration metric: took 398.627758ms for pod "kube-proxy-8c6vn" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.530699   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-srpl8" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.726585   23196 request.go:629] Waited for 195.814641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srpl8
	I0721 23:43:51.726664   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srpl8
	I0721 23:43:51.726670   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.726678   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.726683   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.729647   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:51.927272   23196 request.go:629] Waited for 196.827193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:51.927327   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:51.927331   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:51.927338   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:51.927342   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:51.930672   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:51.931305   23196 pod_ready.go:92] pod "kube-proxy-srpl8" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:51.931324   23196 pod_ready.go:81] duration metric: took 400.618664ms for pod "kube-proxy-srpl8" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:51.931334   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.127505   23196 request.go:629] Waited for 196.102945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251
	I0721 23:43:52.127562   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251
	I0721 23:43:52.127569   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.127579   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.127584   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.130733   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:52.327022   23196 request.go:629] Waited for 195.369501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:52.327079   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251
	I0721 23:43:52.327084   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.327091   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.327094   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.329923   23196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0721 23:43:52.330532   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:52.330548   23196 pod_ready.go:81] duration metric: took 399.206943ms for pod "kube-scheduler-ha-564251" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.330556   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.526589   23196 request.go:629] Waited for 195.962537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m02
	I0721 23:43:52.526687   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m02
	I0721 23:43:52.526696   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.526704   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.526711   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.529872   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:52.727067   23196 request.go:629] Waited for 196.386081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:52.727139   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m02
	I0721 23:43:52.727144   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.727152   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.727159   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.730488   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:52.731218   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:52.731240   23196 pod_ready.go:81] duration metric: took 400.676697ms for pod "kube-scheduler-ha-564251-m02" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.731257   23196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:52.927477   23196 request.go:629] Waited for 196.145575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m03
	I0721 23:43:52.927558   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-564251-m03
	I0721 23:43:52.927564   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:52.927579   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:52.927583   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:52.930775   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:53.126671   23196 request.go:629] Waited for 195.310681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:53.126719   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes/ha-564251-m03
	I0721 23:43:53.126731   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.126748   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.126755   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.129792   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:53.130351   23196 pod_ready.go:92] pod "kube-scheduler-ha-564251-m03" in "kube-system" namespace has status "Ready":"True"
	I0721 23:43:53.130369   23196 pod_ready.go:81] duration metric: took 399.104538ms for pod "kube-scheduler-ha-564251-m03" in "kube-system" namespace to be "Ready" ...
	I0721 23:43:53.130379   23196 pod_ready.go:38] duration metric: took 5.19951489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:43:53.130393   23196 api_server.go:52] waiting for apiserver process to appear ...
	I0721 23:43:53.130440   23196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:43:53.146643   23196 api_server.go:72] duration metric: took 22.035111538s to wait for apiserver process to appear ...
	I0721 23:43:53.146666   23196 api_server.go:88] waiting for apiserver healthz status ...
	I0721 23:43:53.146687   23196 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0721 23:43:53.152312   23196 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0721 23:43:53.152384   23196 round_trippers.go:463] GET https://192.168.39.91:8443/version
	I0721 23:43:53.152395   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.152405   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.152416   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.153278   23196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0721 23:43:53.153329   23196 api_server.go:141] control plane version: v1.30.3
	I0721 23:43:53.153342   23196 api_server.go:131] duration metric: took 6.669849ms to wait for apiserver health ...
	I0721 23:43:53.153351   23196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0721 23:43:53.326762   23196 request.go:629] Waited for 173.343527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:53.326849   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:53.326862   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.326874   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.326886   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.334330   23196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0721 23:43:53.340512   23196 system_pods.go:59] 24 kube-system pods found
	I0721 23:43:53.340538   23196 system_pods.go:61] "coredns-7db6d8ff4d-bsbzk" [7d58d6f8-de63-49bf-9017-3cac954350d0] Running
	I0721 23:43:53.340543   23196 system_pods.go:61] "coredns-7db6d8ff4d-f4lqn" [ebae638d-339c-4241-a5b3-ab4c766efc2f] Running
	I0721 23:43:53.340547   23196 system_pods.go:61] "etcd-ha-564251" [ba57dacd-8bb8-4fc5-8c55-ab660c773c4a] Running
	I0721 23:43:53.340550   23196 system_pods.go:61] "etcd-ha-564251-m02" [4c0aa7df-9cac-4a18-a30f-78412cfce28d] Running
	I0721 23:43:53.340554   23196 system_pods.go:61] "etcd-ha-564251-m03" [54c2633e-32df-4367-affb-a723188f5249] Running
	I0721 23:43:53.340557   23196 system_pods.go:61] "kindnet-99b2q" [84ff92b4-7ad2-44e7-a6e6-89dcbb9413e2] Running
	I0721 23:43:53.340560   23196 system_pods.go:61] "kindnet-jz5md" [f109e939-9f9b-4fa8-b844-4c2652615933] Running
	I0721 23:43:53.340563   23196 system_pods.go:61] "kindnet-s2t8k" [96cd07e3-b249-4f1b-a6c0-6e2bc2791df1] Running
	I0721 23:43:53.340566   23196 system_pods.go:61] "kube-apiserver-ha-564251" [284aac5b-c6af-4a2f-bece-dfb3ca4fde87] Running
	I0721 23:43:53.340569   23196 system_pods.go:61] "kube-apiserver-ha-564251-m02" [291efb5d-a0a6-4edd-8258-4a2b85f91e6f] Running
	I0721 23:43:53.340571   23196 system_pods.go:61] "kube-apiserver-ha-564251-m03" [ecb696ba-6d8b-43e2-a700-f4e60e8b6bfd] Running
	I0721 23:43:53.340575   23196 system_pods.go:61] "kube-controller-manager-ha-564251" [44710bc5-1824-4df6-b321-ac7db26d18a5] Running
	I0721 23:43:53.340577   23196 system_pods.go:61] "kube-controller-manager-ha-564251-m02" [ec0dd23d-58ee-49ca-b8e4-29ad2032a915] Running
	I0721 23:43:53.340580   23196 system_pods.go:61] "kube-controller-manager-ha-564251-m03" [bb892047-2a7f-49ad-ae3b-d596e27123d4] Running
	I0721 23:43:53.340583   23196 system_pods.go:61] "kube-proxy-2xlks" [67ba351a-20c6-442f-bc11-d1363ee387f7] Running
	I0721 23:43:53.340586   23196 system_pods.go:61] "kube-proxy-8c6vn" [5b85365a-8a91-4e17-be4f-efc76e876e35] Running
	I0721 23:43:53.340589   23196 system_pods.go:61] "kube-proxy-srpl8" [faae2035-d506-4dd6-98b6-c3c5f5b53e84] Running
	I0721 23:43:53.340592   23196 system_pods.go:61] "kube-scheduler-ha-564251" [c7cd3ce3-94c8-4369-ba32-b832940c6aec] Running
	I0721 23:43:53.340594   23196 system_pods.go:61] "kube-scheduler-ha-564251-m02" [23912687-c898-47f3-91a9-c8784fb5d557] Running
	I0721 23:43:53.340597   23196 system_pods.go:61] "kube-scheduler-ha-564251-m03" [8242efc1-a265-4d55-aa13-b6ffc5fafabb] Running
	I0721 23:43:53.340600   23196 system_pods.go:61] "kube-vip-ha-564251" [e865cc87-be77-43f3-bef2-4c47dbe7ffe5] Running
	I0721 23:43:53.340603   23196 system_pods.go:61] "kube-vip-ha-564251-m02" [84f924b2-df09-413e-8a12-658116f072d3] Running
	I0721 23:43:53.340606   23196 system_pods.go:61] "kube-vip-ha-564251-m03" [acec0505-d562-4e84-8d2c-355d77f73d71] Running
	I0721 23:43:53.340609   23196 system_pods.go:61] "storage-provisioner" [75c1992e-23ca-41e0-b046-1b70a6f6f63a] Running
	I0721 23:43:53.340614   23196 system_pods.go:74] duration metric: took 187.254705ms to wait for pod list to return data ...
	I0721 23:43:53.340624   23196 default_sa.go:34] waiting for default service account to be created ...
	I0721 23:43:53.527024   23196 request.go:629] Waited for 186.337733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/default/serviceaccounts
	I0721 23:43:53.527083   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/default/serviceaccounts
	I0721 23:43:53.527091   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.527101   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.527113   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.530370   23196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0721 23:43:53.530497   23196 default_sa.go:45] found service account: "default"
	I0721 23:43:53.530514   23196 default_sa.go:55] duration metric: took 189.883296ms for default service account to be created ...
	I0721 23:43:53.530525   23196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0721 23:43:53.726973   23196 request.go:629] Waited for 196.366837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:53.727061   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/namespaces/kube-system/pods
	I0721 23:43:53.727073   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.727084   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.727095   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.735804   23196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0721 23:43:53.742063   23196 system_pods.go:86] 24 kube-system pods found
	I0721 23:43:53.742087   23196 system_pods.go:89] "coredns-7db6d8ff4d-bsbzk" [7d58d6f8-de63-49bf-9017-3cac954350d0] Running
	I0721 23:43:53.742092   23196 system_pods.go:89] "coredns-7db6d8ff4d-f4lqn" [ebae638d-339c-4241-a5b3-ab4c766efc2f] Running
	I0721 23:43:53.742096   23196 system_pods.go:89] "etcd-ha-564251" [ba57dacd-8bb8-4fc5-8c55-ab660c773c4a] Running
	I0721 23:43:53.742100   23196 system_pods.go:89] "etcd-ha-564251-m02" [4c0aa7df-9cac-4a18-a30f-78412cfce28d] Running
	I0721 23:43:53.742104   23196 system_pods.go:89] "etcd-ha-564251-m03" [54c2633e-32df-4367-affb-a723188f5249] Running
	I0721 23:43:53.742109   23196 system_pods.go:89] "kindnet-99b2q" [84ff92b4-7ad2-44e7-a6e6-89dcbb9413e2] Running
	I0721 23:43:53.742115   23196 system_pods.go:89] "kindnet-jz5md" [f109e939-9f9b-4fa8-b844-4c2652615933] Running
	I0721 23:43:53.742121   23196 system_pods.go:89] "kindnet-s2t8k" [96cd07e3-b249-4f1b-a6c0-6e2bc2791df1] Running
	I0721 23:43:53.742129   23196 system_pods.go:89] "kube-apiserver-ha-564251" [284aac5b-c6af-4a2f-bece-dfb3ca4fde87] Running
	I0721 23:43:53.742139   23196 system_pods.go:89] "kube-apiserver-ha-564251-m02" [291efb5d-a0a6-4edd-8258-4a2b85f91e6f] Running
	I0721 23:43:53.742144   23196 system_pods.go:89] "kube-apiserver-ha-564251-m03" [ecb696ba-6d8b-43e2-a700-f4e60e8b6bfd] Running
	I0721 23:43:53.742150   23196 system_pods.go:89] "kube-controller-manager-ha-564251" [44710bc5-1824-4df6-b321-ac7db26d18a5] Running
	I0721 23:43:53.742159   23196 system_pods.go:89] "kube-controller-manager-ha-564251-m02" [ec0dd23d-58ee-49ca-b8e4-29ad2032a915] Running
	I0721 23:43:53.742166   23196 system_pods.go:89] "kube-controller-manager-ha-564251-m03" [bb892047-2a7f-49ad-ae3b-d596e27123d4] Running
	I0721 23:43:53.742171   23196 system_pods.go:89] "kube-proxy-2xlks" [67ba351a-20c6-442f-bc11-d1363ee387f7] Running
	I0721 23:43:53.742177   23196 system_pods.go:89] "kube-proxy-8c6vn" [5b85365a-8a91-4e17-be4f-efc76e876e35] Running
	I0721 23:43:53.742181   23196 system_pods.go:89] "kube-proxy-srpl8" [faae2035-d506-4dd6-98b6-c3c5f5b53e84] Running
	I0721 23:43:53.742187   23196 system_pods.go:89] "kube-scheduler-ha-564251" [c7cd3ce3-94c8-4369-ba32-b832940c6aec] Running
	I0721 23:43:53.742191   23196 system_pods.go:89] "kube-scheduler-ha-564251-m02" [23912687-c898-47f3-91a9-c8784fb5d557] Running
	I0721 23:43:53.742197   23196 system_pods.go:89] "kube-scheduler-ha-564251-m03" [8242efc1-a265-4d55-aa13-b6ffc5fafabb] Running
	I0721 23:43:53.742201   23196 system_pods.go:89] "kube-vip-ha-564251" [e865cc87-be77-43f3-bef2-4c47dbe7ffe5] Running
	I0721 23:43:53.742206   23196 system_pods.go:89] "kube-vip-ha-564251-m02" [84f924b2-df09-413e-8a12-658116f072d3] Running
	I0721 23:43:53.742210   23196 system_pods.go:89] "kube-vip-ha-564251-m03" [acec0505-d562-4e84-8d2c-355d77f73d71] Running
	I0721 23:43:53.742216   23196 system_pods.go:89] "storage-provisioner" [75c1992e-23ca-41e0-b046-1b70a6f6f63a] Running
	I0721 23:43:53.742225   23196 system_pods.go:126] duration metric: took 211.693904ms to wait for k8s-apps to be running ...
	I0721 23:43:53.742237   23196 system_svc.go:44] waiting for kubelet service to be running ....
	I0721 23:43:53.742283   23196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:43:53.757770   23196 system_svc.go:56] duration metric: took 15.524949ms WaitForService to wait for kubelet
	I0721 23:43:53.757799   23196 kubeadm.go:582] duration metric: took 22.64627139s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:43:53.757815   23196 node_conditions.go:102] verifying NodePressure condition ...
	I0721 23:43:53.926970   23196 request.go:629] Waited for 169.07378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.91:8443/api/v1/nodes
	I0721 23:43:53.927030   23196 round_trippers.go:463] GET https://192.168.39.91:8443/api/v1/nodes
	I0721 23:43:53.927038   23196 round_trippers.go:469] Request Headers:
	I0721 23:43:53.927049   23196 round_trippers.go:473]     Accept: application/json, */*
	I0721 23:43:53.927056   23196 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0721 23:43:53.931456   23196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0721 23:43:53.932551   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:43:53.932572   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:43:53.932584   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:43:53.932587   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:43:53.932590   23196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:43:53.932593   23196 node_conditions.go:123] node cpu capacity is 2
	I0721 23:43:53.932598   23196 node_conditions.go:105] duration metric: took 174.777231ms to run NodePressure ...
	I0721 23:43:53.932608   23196 start.go:241] waiting for startup goroutines ...
	I0721 23:43:53.932626   23196 start.go:255] writing updated cluster config ...
	I0721 23:43:53.932865   23196 ssh_runner.go:195] Run: rm -f paused
	I0721 23:43:53.984198   23196 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0721 23:43:53.985909   23196 out.go:177] * Done! kubectl is now configured to use "ha-564251" cluster and "default" namespace by default
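	(Editor's note: the log above shows minikube's readiness poll — for each control-plane pod it GETs the pod, checks the Ready condition, then GETs the pod's node, with ~200ms "Waited for ... due to client-side throttling" gaps inserted by client-go's default client-side rate limiter. The sketch below is an illustration written for this report, not minikube's actual pod_ready.go: the pod name, namespace, and 6-minute timeout are taken from the log; the kubeconfig loading and 2s poll interval are assumptions.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True,
	// matching the `has status "Ready":"True"` checks in the log.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); minikube points this
		// at the cluster it just started.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Wait up to 6m for one control-plane pod, mirroring the per-pod
		// "waiting up to 6m0s" timeout in the log. client-go's default QPS
		// limit (5) is what produces the client-side throttling messages.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-564251", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				return podReady(pod), nil
			})
		fmt.Println("ready:", err == nil)
	}

	(The real code additionally re-fetches the pod's node, walks every pod matching the system-critical label selectors, and then moves on to the healthz, service-account, and NodePressure checks seen above; this sketch only shows the per-pod wait loop.)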
	
	
	==> CRI-O <==
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.177852721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721605715177827800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e99ecac-d8fe-4ffb-8334-b4135f064e5a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.178611454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42416f8d-9fcc-40f8-b9a8-f4d94e51e26f name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.178664313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42416f8d-9fcc-40f8-b9a8-f4d94e51e26f name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.178908986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605438091236780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306949878120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0,PodSandboxId:2cd28c9ca5ac8e1abd87d642c9ce470b7f74994d1daf2847a46cbfd9d484f9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721605306941293238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306869088689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-33
9c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721605295239532071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172160529
1575665482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007,PodSandboxId:5d8c01689d032c61a375f6d41985d763c38f024c41dbf3ad2fa6782c9cb654f9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172160527350
0133899,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcfe16697573d7920cf75add2f90240,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605270815923890,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed,PodSandboxId:4c669b6cce38be1c1629208e0a481d2b0cdaacde4c7d151d08410182e750dd2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605270778824115,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3,PodSandboxId:08f4ba91fc6acb867b58183f7e7ec64c2ea587bb2b6d4211b99026ba25fc51c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605270766867318,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605270662100386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42416f8d-9fcc-40f8-b9a8-f4d94e51e26f name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.222154214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=134c49fe-c919-4fb8-88d1-19853b76cae1 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.222227671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=134c49fe-c919-4fb8-88d1-19853b76cae1 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.223441321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20ecd7cc-0fa6-4d14-848c-ed62035be19b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.224022985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721605715223999122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20ecd7cc-0fa6-4d14-848c-ed62035be19b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.224471409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c87690d6-4882-4142-9d8a-c0c62fe53765 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.224524699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c87690d6-4882-4142-9d8a-c0c62fe53765 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.224870424Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605438091236780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306949878120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0,PodSandboxId:2cd28c9ca5ac8e1abd87d642c9ce470b7f74994d1daf2847a46cbfd9d484f9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721605306941293238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306869088689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-33
9c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721605295239532071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172160529
1575665482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007,PodSandboxId:5d8c01689d032c61a375f6d41985d763c38f024c41dbf3ad2fa6782c9cb654f9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172160527350
0133899,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcfe16697573d7920cf75add2f90240,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605270815923890,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed,PodSandboxId:4c669b6cce38be1c1629208e0a481d2b0cdaacde4c7d151d08410182e750dd2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605270778824115,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3,PodSandboxId:08f4ba91fc6acb867b58183f7e7ec64c2ea587bb2b6d4211b99026ba25fc51c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605270766867318,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605270662100386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c87690d6-4882-4142-9d8a-c0c62fe53765 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.271174463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21fae60b-140e-4af7-aa00-7fc032c965e8 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.271245142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21fae60b-140e-4af7-aa00-7fc032c965e8 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.273338059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43571664-57ae-4afb-9b7c-9bba060eb202 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.273919447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721605715273895081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43571664-57ae-4afb-9b7c-9bba060eb202 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.274411451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72d1844f-e716-4490-8600-eceaffb78a79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.274464623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72d1844f-e716-4490-8600-eceaffb78a79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.274738592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605438091236780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306949878120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0,PodSandboxId:2cd28c9ca5ac8e1abd87d642c9ce470b7f74994d1daf2847a46cbfd9d484f9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721605306941293238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306869088689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-33
9c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721605295239532071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172160529
1575665482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007,PodSandboxId:5d8c01689d032c61a375f6d41985d763c38f024c41dbf3ad2fa6782c9cb654f9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172160527350
0133899,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcfe16697573d7920cf75add2f90240,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605270815923890,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed,PodSandboxId:4c669b6cce38be1c1629208e0a481d2b0cdaacde4c7d151d08410182e750dd2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605270778824115,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3,PodSandboxId:08f4ba91fc6acb867b58183f7e7ec64c2ea587bb2b6d4211b99026ba25fc51c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605270766867318,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605270662100386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72d1844f-e716-4490-8600-eceaffb78a79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.318279629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf48cf08-b0e6-476e-bfcc-6b7e8a420996 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.318356396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf48cf08-b0e6-476e-bfcc-6b7e8a420996 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.319793295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88b35ff0-20a6-4afc-a027-fd72abb57bb0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.320213885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721605715320193992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88b35ff0-20a6-4afc-a027-fd72abb57bb0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.320768151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d5085ff-6b4f-46b8-aada-474981911484 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.320819079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d5085ff-6b4f-46b8-aada-474981911484 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:48:35 ha-564251 crio[681]: time="2024-07-21 23:48:35.321057583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605438091236780,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306949878120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0,PodSandboxId:2cd28c9ca5ac8e1abd87d642c9ce470b7f74994d1daf2847a46cbfd9d484f9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721605306941293238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605306869088689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-33
9c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721605295239532071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172160529
1575665482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007,PodSandboxId:5d8c01689d032c61a375f6d41985d763c38f024c41dbf3ad2fa6782c9cb654f9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172160527350
0133899,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fcfe16697573d7920cf75add2f90240,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605270815923890,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed,PodSandboxId:4c669b6cce38be1c1629208e0a481d2b0cdaacde4c7d151d08410182e750dd2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605270778824115,Labels:map[string]string{io.kubernetes.contain
er.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3,PodSandboxId:08f4ba91fc6acb867b58183f7e7ec64c2ea587bb2b6d4211b99026ba25fc51c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605270766867318,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605270662100386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d5085ff-6b4f-46b8-aada-474981911484 name=/runtime.v1.RuntimeService/ListContainers
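
The interceptor entries above are CRI-O answering the standard CRI RPCs (Version, ImageFsInfo, ListContainers) for clients polling the runtime, logged at debug level. The same data can be pulled by hand from inside the guest; a minimal sketch, assuming crictl is available in the minikube VM and CRI-O is listening on its default socket path:

    $ minikube ssh -p ha-564251
    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a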
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3769ca1c0d189       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   4399dac80b572       busybox-fc5497c4f-tvjh7
	fd88a6f6b66dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   60549b9fc09ba       coredns-7db6d8ff4d-bsbzk
	db39c7c7e0f7c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2cd28c9ca5ac8       storage-provisioner
	d708ea287a4e1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   3cf5796c9ffab       coredns-7db6d8ff4d-f4lqn
	b2afbf6c4dfa0       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   8c7a9ed52b5b4       kindnet-jz5md
	777c36438bf0f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   997932c064fbe       kube-proxy-srpl8
	bd2d1274e4986       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   5d8c01689d032       kube-vip-ha-564251
	22bd5cac142d6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   2d4165e9b2df2       kube-scheduler-ha-564251
	fb0b898c77f8d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   4c669b6cce38b       kube-apiserver-ha-564251
	17153bc2e8cea       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   08f4ba91fc6ac       kube-controller-manager-ha-564251
	9863a1f5cf334       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   bc6861a50f8f6       etcd-ha-564251
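
Each row above maps a container ID back to its image, restart count, and owning pod. The same state can be cross-checked at the pod level from the host; a sketch, assuming the kubectl context carries the profile name as minikube configures it:

    $ kubectl --context ha-564251 get pods -A -o wide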
	
	
	==> coredns [d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091] <==
	[INFO] 10.244.1.2:43405 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00026763s
	[INFO] 10.244.2.2:54021 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001690935s
	[INFO] 10.244.2.2:51685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084922s
	[INFO] 10.244.2.2:33159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100397s
	[INFO] 10.244.2.2:33164 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122928s
	[INFO] 10.244.2.2:43819 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076913s
	[INFO] 10.244.2.2:59599 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063404s
	[INFO] 10.244.0.4:53831 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001206293s
	[INFO] 10.244.0.4:57062 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100467s
	[INFO] 10.244.1.2:34188 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014651s
	[INFO] 10.244.1.2:41501 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011577s
	[INFO] 10.244.1.2:34022 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084216s
	[INFO] 10.244.2.2:36668 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118928s
	[INFO] 10.244.0.4:60553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129219s
	[INFO] 10.244.0.4:34229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158514s
	[INFO] 10.244.0.4:35099 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013345s
	[INFO] 10.244.1.2:60128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204062s
	[INFO] 10.244.1.2:51220 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169537s
	[INFO] 10.244.1.2:50118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000261213s
	[INFO] 10.244.2.2:42616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012241s
	[INFO] 10.244.2.2:51984 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223089s
	[INFO] 10.244.2.2:60866 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100348s
	[INFO] 10.244.0.4:38494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093863s
	[INFO] 10.244.0.4:56964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080856s
	[INFO] 10.244.0.4:37413 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172185s
	
	
	==> coredns [fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50666 - 38198 "HINFO IN 5523897286626880771.7232038906359800539. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010268589s
	[INFO] 10.244.1.2:42574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000556256s
	[INFO] 10.244.1.2:48153 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.011402035s
	[INFO] 10.244.2.2:35506 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000434307s
	[INFO] 10.244.2.2:50811 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001393702s
	[INFO] 10.244.1.2:47400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171001s
	[INFO] 10.244.1.2:51399 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162839s
	[INFO] 10.244.2.2:46920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139973s
	[INFO] 10.244.2.2:45334 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001092856s
	[INFO] 10.244.0.4:53396 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109772s
	[INFO] 10.244.0.4:54634 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001652249s
	[INFO] 10.244.0.4:45490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147442s
	[INFO] 10.244.0.4:46915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090743s
	[INFO] 10.244.0.4:60906 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127948s
	[INFO] 10.244.0.4:36593 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118548s
	[INFO] 10.244.1.2:59477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105785s
	[INFO] 10.244.2.2:48044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138738s
	[INFO] 10.244.2.2:48209 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093024s
	[INFO] 10.244.2.2:54967 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089783s
	[INFO] 10.244.0.4:47425 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088831s
	[INFO] 10.244.1.2:59455 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131678s
	[INFO] 10.244.2.2:60606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089108s
	[INFO] 10.244.0.4:46173 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097876s
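
Each [INFO] line above is one query record from CoreDNS's log plugin: client ip:port, query id, then the quoted query (type, class, name, transport, request size, DO bit, buffer size), followed by the response code, flags, reply size, and latency. A lookup of this shape can be reproduced from any pod; a sketch reusing the busybox image already present in this run (the pod name dnscheck is illustrative):

    $ kubectl --context ha-564251 run dnscheck --rm -it --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local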
	
	
	==> describe nodes <==
	Name:               ha-564251
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T23_41_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:48:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:44:23 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:44:23 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:44:23 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:44:23 +0000   Sun, 21 Jul 2024 23:41:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    ha-564251
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 83877339e2d74557b5e6d75fd0a30c5b
	  System UUID:                83877339-e2d7-4557-b5e6-d75fd0a30c5b
	  Boot ID:                    4d4acbc6-fdf1-4a14-b622-8bad377224dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tvjh7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 coredns-7db6d8ff4d-bsbzk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 coredns-7db6d8ff4d-f4lqn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 etcd-ha-564251                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m15s
	  kube-system                 kindnet-jz5md                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m5s
	  kube-system                 kube-apiserver-ha-564251             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-ha-564251    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-srpl8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-ha-564251             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-vip-ha-564251                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m3s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m25s (x7 over 7m25s)  kubelet          Node ha-564251 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m25s (x8 over 7m25s)  kubelet          Node ha-564251 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m25s (x8 over 7m25s)  kubelet          Node ha-564251 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m15s                  kubelet          Node ha-564251 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s                  kubelet          Node ha-564251 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s                  kubelet          Node ha-564251 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal  NodeReady                6m49s                  kubelet          Node ha-564251 status is now: NodeReady
	  Normal  RegisteredNode           6m1s                   node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	
	
	Name:               ha-564251-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_42_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:42:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:45:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Jul 2024 23:44:19 +0000   Sun, 21 Jul 2024 23:45:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Jul 2024 23:44:19 +0000   Sun, 21 Jul 2024 23:45:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Jul 2024 23:44:19 +0000   Sun, 21 Jul 2024 23:45:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Jul 2024 23:44:19 +0000   Sun, 21 Jul 2024 23:45:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-564251-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8db54debc3f459a84145497caff8bc1
	  System UUID:                e8db54de-bc3f-459a-8414-5497caff8bc1
	  Boot ID:                    e9c8db11-8f9d-4e77-bb70-f3aef06af356
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2jrmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 etcd-ha-564251-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-99b2q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-564251-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-564251-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-8c6vn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-564251-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-564251-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m19s (x8 over 6m19s)  kubelet          Node ha-564251-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s (x8 over 6m19s)  kubelet          Node ha-564251-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s (x7 over 6m19s)  kubelet          Node ha-564251-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           6m1s                   node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  NodeNotReady             2m45s                  node-controller  Node ha-564251-m02 status is now: NodeNotReady
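
ha-564251-m02 is the one node whose conditions read Unknown and which carries the unreachable NoExecute/NoSchedule taints; its lease RenewTime stopped at 23:45:09, consistent with its kubelet being shut down during the secondary-node stop step of this HA run. A quick way to surface that state, sketched against the same context name:

    $ kubectl --context ha-564251 get nodes
    $ kubectl --context ha-564251 describe node ha-564251-m02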
	
	
	Name:               ha-564251-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_43_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:43:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:48:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:44:28 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:44:28 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:44:28 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:44:28 +0000   Sun, 21 Jul 2024 23:43:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-564251-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 edaed2175ae2489883b557af269e9263
	  System UUID:                edaed217-5ae2-4898-83b5-57af269e9263
	  Boot ID:                    d9bd97ea-d279-48c4-b4cf-847e1fb7c8fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s2cqd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 etcd-ha-564251-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-s2t8k                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-564251-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-ha-564251-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-2xlks                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-564251-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-vip-ha-564251-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node ha-564251-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node ha-564251-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)  kubelet          Node ha-564251-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal  RegisteredNode           4m50s                node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	
	
	Name:               ha-564251-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_44_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:44:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:48:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:45:02 +0000   Sun, 21 Jul 2024 23:44:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:45:02 +0000   Sun, 21 Jul 2024 23:44:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:45:02 +0000   Sun, 21 Jul 2024 23:44:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:45:02 +0000   Sun, 21 Jul 2024 23:44:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    ha-564251-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf784ac43fb240a1b428a7ebf8ca34bc
	  System UUID:                cf784ac4-3fb2-40a1-b428-a7ebf8ca34bc
	  Boot ID:                    344142ed-1d06-4520-a624-7c3d556f224c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6mfjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-proxy-lv5zw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x2 over 4m4s)  kubelet          Node ha-564251-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x2 over 4m4s)  kubelet          Node ha-564251-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x2 over 4m4s)  kubelet          Node ha-564251-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal  NodeReady                3m44s                kubelet          Node ha-564251-m04 status is now: NodeReady
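	
	(The request/limit percentages in the node descriptions above are relative to each node's allocatable resources. For ha-564251-m04, for example, 100m CPU against 2 allocatable cores is 100/2000 = 5%, and 50Mi of memory against 2164184Ki allocatable is 51200/2164184 ≈ 2%.)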
	
	
	==> dmesg <==
	[Jul21 23:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050509] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.420656] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.747762] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.566670] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul21 23:41] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.053909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055459] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.166215] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.145388] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268301] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.918090] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.419554] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.062251] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.216979] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075586] kauditd_printk_skb: 79 callbacks suppressed
	[ +11.003747] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.099946] kauditd_printk_skb: 34 callbacks suppressed
	[Jul21 23:42] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7] <==
	{"level":"warn","ts":"2024-07-21T23:48:35.553214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.580917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.585491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.586689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.591765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.596639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.606883Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.613321Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.619739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.623356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.626622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.633731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.640199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.651309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.657051Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.665019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.675656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.681929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.688628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.689637Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.692778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.69584Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.700954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.706984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:48:35.714182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:48:35 up 7 min,  0 users,  load average: 0.07, 0.22, 0.13
	Linux ha-564251 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5] <==
	I0721 23:47:56.159748       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:48:06.159178       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:48:06.159211       1 main.go:299] handling current node
	I0721 23:48:06.159226       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:48:06.159231       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:48:06.159368       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:48:06.159374       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:48:06.159510       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:48:06.159531       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:48:16.153361       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:48:16.153419       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:48:16.153693       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:48:16.153725       1 main.go:299] handling current node
	I0721 23:48:16.153743       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:48:16.153749       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:48:16.153834       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:48:16.153862       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:48:26.153337       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:48:26.153434       1 main.go:299] handling current node
	I0721 23:48:26.153466       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:48:26.153500       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:48:26.153724       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:48:26.153760       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:48:26.153830       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:48:26.153848       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed] <==
	I0721 23:41:15.479862       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0721 23:41:15.486032       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.91]
	I0721 23:41:15.487091       1 controller.go:615] quota admission added evaluator for: endpoints
	I0721 23:41:15.491154       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0721 23:41:15.828239       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0721 23:41:20.136856       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0721 23:41:20.154190       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0721 23:41:20.167388       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0721 23:41:30.134929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0721 23:41:30.239978       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0721 23:43:59.080244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51542: use of closed network connection
	E0721 23:43:59.272470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51554: use of closed network connection
	E0721 23:43:59.450298       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51582: use of closed network connection
	E0721 23:43:59.626286       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51610: use of closed network connection
	E0721 23:43:59.804539       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51638: use of closed network connection
	E0721 23:43:59.995510       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51658: use of closed network connection
	E0721 23:44:00.179899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51682: use of closed network connection
	E0721 23:44:00.350828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51700: use of closed network connection
	E0721 23:44:00.532002       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51718: use of closed network connection
	E0721 23:44:00.822858       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51748: use of closed network connection
	E0721 23:44:01.005207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51764: use of closed network connection
	E0721 23:44:01.174041       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51782: use of closed network connection
	E0721 23:44:01.339015       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51802: use of closed network connection
	E0721 23:44:01.520023       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51812: use of closed network connection
	E0721 23:44:01.685538       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51822: use of closed network connection
	
	
	==> kube-controller-manager [17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3] <==
	E0721 23:43:26.986885       1 certificate_controller.go:146] Sync csr-2pqtc failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2pqtc": the object has been modified; please apply your changes to the latest version and try again
	I0721 23:43:27.098942       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-564251-m03\" does not exist"
	I0721 23:43:27.115955       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-564251-m03" podCIDRs=["10.244.2.0/24"]
	I0721 23:43:29.542084       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-564251-m03"
	I0721 23:43:54.881326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.614392ms"
	I0721 23:43:54.914728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.253805ms"
	I0721 23:43:55.149752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="234.877996ms"
	I0721 23:43:55.362258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="212.313649ms"
	I0721 23:43:55.376751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.404727ms"
	I0721 23:43:55.377159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.648µs"
	I0721 23:43:56.791808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.097µs"
	I0721 23:43:57.035226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.56µs"
	I0721 23:43:58.288611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.371131ms"
	E0721 23:43:58.288939       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0721 23:43:58.289216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.28µs"
	I0721 23:43:58.294312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.946µs"
	I0721 23:43:58.663010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.119717ms"
	I0721 23:43:58.663248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.357µs"
	I0721 23:44:31.700464       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-564251-m04\" does not exist"
	I0721 23:44:31.759239       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-564251-m04" podCIDRs=["10.244.3.0/24"]
	I0721 23:44:34.568264       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-564251-m04"
	I0721 23:44:51.855822       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-564251-m04"
	I0721 23:45:50.694131       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-564251-m04"
	I0721 23:45:50.735804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.833733ms"
	I0721 23:45:50.735943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.502µs"
	
	
	==> kube-proxy [777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5] <==
	I0721 23:41:31.760987       1 server_linux.go:69] "Using iptables proxy"
	I0721 23:41:31.776156       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.91"]
	I0721 23:41:31.806920       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:41:31.806992       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:41:31.807008       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:41:31.809771       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:41:31.810322       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:41:31.810347       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:41:31.811902       1 config.go:192] "Starting service config controller"
	I0721 23:41:31.812086       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:41:31.812738       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:41:31.812771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:41:31.813817       1 config.go:319] "Starting node config controller"
	I0721 23:41:31.813839       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:41:31.912496       1 shared_informer.go:320] Caches are synced for service config
	I0721 23:41:31.913157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0721 23:41:31.913966       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624] <==
	E0721 23:43:27.174790       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t48tm\": pod kindnet-t48tm is already assigned to node \"ha-564251-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-t48tm" node="ha-564251-m03"
	E0721 23:43:27.174853       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2xlks\": pod kube-proxy-2xlks is already assigned to node \"ha-564251-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2xlks" node="ha-564251-m03"
	E0721 23:43:27.181792       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 67ba351a-20c6-442f-bc11-d1363ee387f7(kube-system/kube-proxy-2xlks) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2xlks"
	E0721 23:43:27.181860       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2xlks\": pod kube-proxy-2xlks is already assigned to node \"ha-564251-m03\"" pod="kube-system/kube-proxy-2xlks"
	I0721 23:43:27.181927       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2xlks" node="ha-564251-m03"
	E0721 23:43:27.181736       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod aba6c570-6264-44fd-8775-e6d340bebd1d(kube-system/kindnet-t48tm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-t48tm"
	E0721 23:43:27.184253       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t48tm\": pod kindnet-t48tm is already assigned to node \"ha-564251-m03\"" pod="kube-system/kindnet-t48tm"
	I0721 23:43:27.186380       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t48tm" node="ha-564251-m03"
	E0721 23:43:27.255933       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s2t8k\": pod kindnet-s2t8k is already assigned to node \"ha-564251-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-s2t8k" node="ha-564251-m03"
	E0721 23:43:27.255987       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 96cd07e3-b249-4f1b-a6c0-6e2bc2791df1(kube-system/kindnet-s2t8k) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-s2t8k"
	E0721 23:43:27.256006       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s2t8k\": pod kindnet-s2t8k is already assigned to node \"ha-564251-m03\"" pod="kube-system/kindnet-s2t8k"
	I0721 23:43:27.256025       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-s2t8k" node="ha-564251-m03"
	E0721 23:43:27.256220       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hks9x\": pod kube-proxy-hks9x is already assigned to node \"ha-564251-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hks9x" node="ha-564251-m03"
	E0721 23:43:27.256303       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 39d8a046-3214-49a6-9e1e-044e7ef50834(kube-system/kube-proxy-hks9x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hks9x"
	E0721 23:43:27.256392       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hks9x\": pod kube-proxy-hks9x is already assigned to node \"ha-564251-m03\"" pod="kube-system/kube-proxy-hks9x"
	I0721 23:43:27.258116       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hks9x" node="ha-564251-m03"
	E0721 23:43:55.105186       1 schedule_one.go:1067] "Error occurred" err="Pod default/busybox-fc5497c4f-s4brh is already present in the active queue" pod="default/busybox-fc5497c4f-s4brh"
	E0721 23:44:31.772650       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lv5zw\": pod kube-proxy-lv5zw is already assigned to node \"ha-564251-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lv5zw" node="ha-564251-m04"
	E0721 23:44:31.773002       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e18641cd-1554-44c4-8fe3-e0a8903f9a46(kube-system/kube-proxy-lv5zw) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lv5zw"
	E0721 23:44:31.773145       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lv5zw\": pod kube-proxy-lv5zw is already assigned to node \"ha-564251-m04\"" pod="kube-system/kube-proxy-lv5zw"
	I0721 23:44:31.773430       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lv5zw" node="ha-564251-m04"
	E0721 23:44:31.879012       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lg2lc\": pod kindnet-lg2lc is already assigned to node \"ha-564251-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lg2lc" node="ha-564251-m04"
	E0721 23:44:31.879975       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 84debccc-791a-4de4-b195-15eb22ba5a1c(kube-system/kindnet-lg2lc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lg2lc"
	E0721 23:44:31.880277       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lg2lc\": pod kindnet-lg2lc is already assigned to node \"ha-564251-m04\"" pod="kube-system/kindnet-lg2lc"
	I0721 23:44:31.880360       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lg2lc" node="ha-564251-m04"
	
	
	==> kubelet <==
	Jul 21 23:44:20 ha-564251 kubelet[1363]: E0721 23:44:20.022412    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:44:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:44:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:44:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:44:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:45:20 ha-564251 kubelet[1363]: E0721 23:45:20.022211    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:45:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:45:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:45:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:45:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:46:20 ha-564251 kubelet[1363]: E0721 23:46:20.023977    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:46:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:46:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:46:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:46:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:47:20 ha-564251 kubelet[1363]: E0721 23:47:20.021242    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:47:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:47:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:47:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:47:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:48:20 ha-564251 kubelet[1363]: E0721 23:48:20.020731    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:48:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:48:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:48:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:48:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
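	
	The recurring canary failure above means ip6tables cannot find a nat table in the guest kernel; kubelet retries it hourly, and it is harmless noise unless IPv6 service routing is needed. A quick check from inside the VM, assuming SSH access via the profile (e.g. "minikube ssh -p ha-564251"), might be:
	
	  lsmod | grep -E 'ip6table_nat|ip6_tables'   # is the module present at all?
	  sudo modprobe ip6table_nat                  # load it, if the guest kernel ships it
	  sudo ip6tables -t nat -L                    # should now list the nat table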
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-564251 -n ha-564251
helpers_test.go:261: (dbg) Run:  kubectl --context ha-564251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (359.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-564251 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-564251 -v=7 --alsologtostderr
E0721 23:49:55.172920   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:50:22.857417   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-564251 -v=7 --alsologtostderr: exit status 82 (2m1.797325927s)
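Exit status 82 accompanies the GUEST_STOP_TIMEOUT in the stderr dump below: node ha-564251-m04 stopped within a second, but the KVM domain for ha-564251-m03 never left the "Running" state during the full 120-iteration wait loop. As a sketch, the same shutdown could be inspected and forced by hand with virsh, assuming the libvirt domain is named after the minikube node as in the kvm2 driver's convention:

	virsh -c qemu:///system domstate ha-564251-m03   # reports "running" while the stop hangs
	virsh -c qemu:///system shutdown ha-564251-m03   # graceful ACPI shutdown request
	virsh -c qemu:///system destroy ha-564251-m03    # hard power-off if the guest ignores ACPI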

                                                
                                                
-- stdout --
	* Stopping node "ha-564251-m04"  ...
	* Stopping node "ha-564251-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 23:48:37.125371   29004 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:48:37.125596   29004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:37.125605   29004 out.go:304] Setting ErrFile to fd 2...
	I0721 23:48:37.125609   29004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:48:37.125776   29004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:48:37.125996   29004 out.go:298] Setting JSON to false
	I0721 23:48:37.126078   29004 mustload.go:65] Loading cluster: ha-564251
	I0721 23:48:37.126430   29004 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:48:37.126510   29004 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:48:37.126708   29004 mustload.go:65] Loading cluster: ha-564251
	I0721 23:48:37.126841   29004 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:48:37.126881   29004 stop.go:39] StopHost: ha-564251-m04
	I0721 23:48:37.127209   29004 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:37.127245   29004 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:37.143099   29004 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0721 23:48:37.143597   29004 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:37.144194   29004 main.go:141] libmachine: Using API Version  1
	I0721 23:48:37.144222   29004 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:37.144536   29004 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:37.147316   29004 out.go:177] * Stopping node "ha-564251-m04"  ...
	I0721 23:48:37.148644   29004 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0721 23:48:37.148681   29004 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:48:37.148912   29004 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0721 23:48:37.148950   29004 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:48:37.151885   29004 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:37.152346   29004 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:44:15 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:48:37.152378   29004 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:48:37.152498   29004 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:48:37.152693   29004 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:48:37.152916   29004 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:48:37.153127   29004 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:48:37.232524   29004 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0721 23:48:37.284834   29004 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0721 23:48:37.336781   29004 main.go:141] libmachine: Stopping "ha-564251-m04"...
	I0721 23:48:37.336809   29004 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:48:37.338283   29004 main.go:141] libmachine: (ha-564251-m04) Calling .Stop
	I0721 23:48:37.341173   29004 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 0/120
	I0721 23:48:38.474905   29004 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:48:38.476174   29004 main.go:141] libmachine: Machine "ha-564251-m04" was stopped.
	I0721 23:48:38.476198   29004 stop.go:75] duration metric: took 1.327579164s to stop
	I0721 23:48:38.476228   29004 stop.go:39] StopHost: ha-564251-m03
	I0721 23:48:38.476616   29004 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:48:38.476668   29004 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:48:38.490969   29004 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45777
	I0721 23:48:38.491345   29004 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:48:38.491833   29004 main.go:141] libmachine: Using API Version  1
	I0721 23:48:38.491858   29004 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:48:38.492154   29004 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:48:38.493882   29004 out.go:177] * Stopping node "ha-564251-m03"  ...
	I0721 23:48:38.495097   29004 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0721 23:48:38.495118   29004 main.go:141] libmachine: (ha-564251-m03) Calling .DriverName
	I0721 23:48:38.495346   29004 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0721 23:48:38.495368   29004 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHHostname
	I0721 23:48:38.498192   29004 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:38.498694   29004 main.go:141] libmachine: (ha-564251-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:e6:b3", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:42:55 +0000 UTC Type:0 Mac:52:54:00:9c:e6:b3 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-564251-m03 Clientid:01:52:54:00:9c:e6:b3}
	I0721 23:48:38.498728   29004 main.go:141] libmachine: (ha-564251-m03) DBG | domain ha-564251-m03 has defined IP address 192.168.39.89 and MAC address 52:54:00:9c:e6:b3 in network mk-ha-564251
	I0721 23:48:38.498826   29004 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHPort
	I0721 23:48:38.498991   29004 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHKeyPath
	I0721 23:48:38.499123   29004 main.go:141] libmachine: (ha-564251-m03) Calling .GetSSHUsername
	I0721 23:48:38.499238   29004 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m03/id_rsa Username:docker}
	I0721 23:48:38.580811   29004 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0721 23:48:38.633242   29004 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0721 23:48:38.686691   29004 main.go:141] libmachine: Stopping "ha-564251-m03"...
	I0721 23:48:38.686716   29004 main.go:141] libmachine: (ha-564251-m03) Calling .GetState
	I0721 23:48:38.688265   29004 main.go:141] libmachine: (ha-564251-m03) Calling .Stop
	I0721 23:48:38.691893   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 0/120
	I0721 23:48:39.693340   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 1/120
	I0721 23:48:40.694856   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 2/120
	I0721 23:48:41.696242   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 3/120
	I0721 23:48:42.697663   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 4/120
	I0721 23:48:43.699614   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 5/120
	I0721 23:48:44.701109   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 6/120
	I0721 23:48:45.702438   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 7/120
	I0721 23:48:46.703966   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 8/120
	I0721 23:48:47.705355   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 9/120
	I0721 23:48:48.707565   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 10/120
	I0721 23:48:49.709191   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 11/120
	I0721 23:48:50.710477   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 12/120
	I0721 23:48:51.711892   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 13/120
	I0721 23:48:52.713037   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 14/120
	I0721 23:48:53.714711   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 15/120
	I0721 23:48:54.716126   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 16/120
	I0721 23:48:55.717659   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 17/120
	I0721 23:48:56.719145   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 18/120
	I0721 23:48:57.720558   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 19/120
	I0721 23:48:58.722643   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 20/120
	I0721 23:48:59.724133   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 21/120
	I0721 23:49:00.725779   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 22/120
	I0721 23:49:01.727309   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 23/120
	I0721 23:49:02.728752   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 24/120
	I0721 23:49:03.730794   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 25/120
	I0721 23:49:04.732187   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 26/120
	I0721 23:49:05.733708   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 27/120
	I0721 23:49:06.735425   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 28/120
	I0721 23:49:07.737492   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 29/120
	I0721 23:49:08.739236   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 30/120
	I0721 23:49:09.741065   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 31/120
	I0721 23:49:10.742693   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 32/120
	I0721 23:49:11.744015   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 33/120
	I0721 23:49:12.745427   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 34/120
	I0721 23:49:13.746934   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 35/120
	I0721 23:49:14.748195   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 36/120
	I0721 23:49:15.749517   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 37/120
	I0721 23:49:16.750922   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 38/120
	I0721 23:49:17.752134   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 39/120
	I0721 23:49:18.753865   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 40/120
	I0721 23:49:19.755056   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 41/120
	I0721 23:49:20.756383   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 42/120
	I0721 23:49:21.757563   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 43/120
	I0721 23:49:22.758946   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 44/120
	I0721 23:49:23.760687   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 45/120
	I0721 23:49:24.762002   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 46/120
	I0721 23:49:25.763169   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 47/120
	I0721 23:49:26.764981   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 48/120
	I0721 23:49:27.766163   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 49/120
	I0721 23:49:28.768018   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 50/120
	I0721 23:49:29.769163   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 51/120
	I0721 23:49:30.770939   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 52/120
	I0721 23:49:31.772941   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 53/120
	I0721 23:49:32.774060   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 54/120
	I0721 23:49:33.776099   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 55/120
	I0721 23:49:34.777251   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 56/120
	I0721 23:49:35.778922   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 57/120
	I0721 23:49:36.780088   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 58/120
	I0721 23:49:37.781269   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 59/120
	I0721 23:49:38.783026   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 60/120
	I0721 23:49:39.784237   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 61/120
	I0721 23:49:40.785603   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 62/120
	I0721 23:49:41.787071   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 63/120
	I0721 23:49:42.789347   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 64/120
	I0721 23:49:43.791177   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 65/120
	I0721 23:49:44.792552   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 66/120
	I0721 23:49:45.793985   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 67/120
	I0721 23:49:46.795467   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 68/120
	I0721 23:49:47.797043   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 69/120
	I0721 23:49:48.798882   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 70/120
	I0721 23:49:49.800178   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 71/120
	I0721 23:49:50.801469   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 72/120
	I0721 23:49:51.802920   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 73/120
	I0721 23:49:52.804142   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 74/120
	I0721 23:49:53.805944   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 75/120
	I0721 23:49:54.807265   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 76/120
	I0721 23:49:55.808636   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 77/120
	I0721 23:49:56.810078   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 78/120
	I0721 23:49:57.811496   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 79/120
	I0721 23:49:58.813396   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 80/120
	I0721 23:49:59.815129   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 81/120
	I0721 23:50:00.816495   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 82/120
	I0721 23:50:01.817854   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 83/120
	I0721 23:50:02.819531   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 84/120
	I0721 23:50:03.821114   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 85/120
	I0721 23:50:04.822377   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 86/120
	I0721 23:50:05.823647   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 87/120
	I0721 23:50:06.824935   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 88/120
	I0721 23:50:07.826397   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 89/120
	I0721 23:50:08.828192   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 90/120
	I0721 23:50:09.829875   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 91/120
	I0721 23:50:10.831177   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 92/120
	I0721 23:50:11.832563   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 93/120
	I0721 23:50:12.834584   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 94/120
	I0721 23:50:13.836547   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 95/120
	I0721 23:50:14.837761   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 96/120
	I0721 23:50:15.838916   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 97/120
	I0721 23:50:16.840961   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 98/120
	I0721 23:50:17.842074   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 99/120
	I0721 23:50:18.843726   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 100/120
	I0721 23:50:19.844994   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 101/120
	I0721 23:50:20.846248   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 102/120
	I0721 23:50:21.847763   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 103/120
	I0721 23:50:22.849122   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 104/120
	I0721 23:50:23.850713   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 105/120
	I0721 23:50:24.851952   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 106/120
	I0721 23:50:25.853503   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 107/120
	I0721 23:50:26.854776   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 108/120
	I0721 23:50:27.856971   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 109/120
	I0721 23:50:28.858463   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 110/120
	I0721 23:50:29.860979   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 111/120
	I0721 23:50:30.862350   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 112/120
	I0721 23:50:31.863748   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 113/120
	I0721 23:50:32.865870   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 114/120
	I0721 23:50:33.867477   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 115/120
	I0721 23:50:34.868793   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 116/120
	I0721 23:50:35.870288   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 117/120
	I0721 23:50:36.871635   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 118/120
	I0721 23:50:37.872985   29004 main.go:141] libmachine: (ha-564251-m03) Waiting for machine to stop 119/120
	I0721 23:50:38.873913   29004 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0721 23:50:38.873964   29004 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0721 23:50:38.875844   29004 out.go:177] 
	W0721 23:50:38.877378   29004 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0721 23:50:38.877395   29004 out.go:239] * 
	W0721 23:50:38.879662   29004 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 23:50:38.880924   29004 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-564251 -v=7 --alsologtostderr" : exit status 82
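
The stop loop above polls the m03 VM once per second and gives up after 120 attempts, so minikube exits with status 82 while the guest is still running. A minimal manual recovery sketch on the KVM host (hypothetical, not part of this run; it assumes the libvirt domain is named after the node, as the libmachine lines above suggest):

	# Hard power-off of the stuck domain; "virsh destroy" is the virtual
	# equivalent of pulling the plug, not a graceful shutdown.
	virsh list --all              # the stuck domain should still report "running"
	virsh destroy ha-564251-m03   # force it off so a follow-up start can proceed
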
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-564251 --wait=true -v=7 --alsologtostderr
E0721 23:52:54.281940   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:54:17.330887   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-564251 --wait=true -v=7 --alsologtostderr: (3m55.562355856s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-564251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-564251 -n ha-564251
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-564251 logs -n 25: (1.766268003s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m02:/home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m02 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04:/home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m04 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp testdata/cp-test.txt                                                | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251:/home/docker/cp-test_ha-564251-m04_ha-564251.txt                       |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251 sudo cat                                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251.txt                                 |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m02:/home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m02 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03:/home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m03 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-564251 node stop m02 -v=7                                                     | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-564251 node start m02 -v=7                                                    | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-564251 -v=7                                                           | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-564251 -v=7                                                                | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-564251 --wait=true -v=7                                                    | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:50 UTC | 21 Jul 24 23:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-564251                                                                | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:54 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:50:38.927786   29454 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:50:38.927920   29454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:50:38.927932   29454 out.go:304] Setting ErrFile to fd 2...
	I0721 23:50:38.927938   29454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:50:38.928194   29454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:50:38.928955   29454 out.go:298] Setting JSON to false
	I0721 23:50:38.930225   29454 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1983,"bootTime":1721603856,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:50:38.930320   29454 start.go:139] virtualization: kvm guest
	I0721 23:50:38.932561   29454 out.go:177] * [ha-564251] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:50:38.933858   29454 notify.go:220] Checking for updates...
	I0721 23:50:38.933880   29454 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:50:38.935112   29454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:50:38.936300   29454 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:50:38.937451   29454 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:50:38.938566   29454 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0721 23:50:38.939834   29454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:50:38.941529   29454 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:50:38.941673   29454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:50:38.942294   29454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:50:38.942344   29454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:50:38.957011   29454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0721 23:50:38.957455   29454 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:50:38.957985   29454 main.go:141] libmachine: Using API Version  1
	I0721 23:50:38.958006   29454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:50:38.958480   29454 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:50:38.958767   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:50:38.995750   29454 out.go:177] * Using the kvm2 driver based on existing profile
	I0721 23:50:38.997139   29454 start.go:297] selected driver: kvm2
	I0721 23:50:38.997157   29454 start.go:901] validating driver "kvm2" against &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.226 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:50:38.997370   29454 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:50:38.997828   29454 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:50:38.997930   29454 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:50:39.012573   29454 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:50:39.013286   29454 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:50:39.013367   29454 cni.go:84] Creating CNI manager for ""
	I0721 23:50:39.013381   29454 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0721 23:50:39.013455   29454 start.go:340] cluster config:
	{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.226 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:50:39.013624   29454 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:50:39.015529   29454 out.go:177] * Starting "ha-564251" primary control-plane node in "ha-564251" cluster
	I0721 23:50:39.016643   29454 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:50:39.016677   29454 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0721 23:50:39.016686   29454 cache.go:56] Caching tarball of preloaded images
	I0721 23:50:39.016753   29454 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:50:39.016763   29454 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:50:39.016910   29454 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:50:39.017096   29454 start.go:360] acquireMachinesLock for ha-564251: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:50:39.017143   29454 start.go:364] duration metric: took 23.119µs to acquireMachinesLock for "ha-564251"
	I0721 23:50:39.017159   29454 start.go:96] Skipping create...Using existing machine configuration
	I0721 23:50:39.017167   29454 fix.go:54] fixHost starting: 
	I0721 23:50:39.017436   29454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:50:39.017465   29454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:50:39.031101   29454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0721 23:50:39.031508   29454 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:50:39.031925   29454 main.go:141] libmachine: Using API Version  1
	I0721 23:50:39.031939   29454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:50:39.032258   29454 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:50:39.032462   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:50:39.032630   29454 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:50:39.034087   29454 fix.go:112] recreateIfNeeded on ha-564251: state=Running err=<nil>
	W0721 23:50:39.034107   29454 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 23:50:39.036757   29454 out.go:177] * Updating the running kvm2 "ha-564251" VM ...
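
fixHost found the machine already in state=Running, so minikube updates the existing VM in place instead of recreating it. A hypothetical cross-check of that state from the host (assumes virsh and the qemu:///system URI shown in the config above):

	# Should report "State: running" for the primary control-plane domain.
	virsh -c qemu:///system dominfo ha-564251 | grep -i '^State'
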
	I0721 23:50:39.038071   29454 machine.go:94] provisionDockerMachine start ...
	I0721 23:50:39.038089   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:50:39.038287   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.040759   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.041153   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.041197   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.041337   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.041517   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.041663   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.041783   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.041917   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:39.042079   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:50:39.042088   29454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 23:50:39.143281   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251
	
	I0721 23:50:39.143312   29454 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:50:39.143584   29454 buildroot.go:166] provisioning hostname "ha-564251"
	I0721 23:50:39.143610   29454 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:50:39.143818   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.146563   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.147011   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.147034   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.147147   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.147327   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.147482   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.147699   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.147887   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:39.148096   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:50:39.148115   29454 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-564251 && echo "ha-564251" | sudo tee /etc/hostname
	I0721 23:50:39.260224   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251
	
	I0721 23:50:39.260293   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.263280   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.263740   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.263768   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.263892   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.264058   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.264218   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.264338   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.264517   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:39.264675   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:50:39.264689   29454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-564251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-564251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-564251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:50:39.359209   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:50:39.359239   29454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:50:39.359255   29454 buildroot.go:174] setting up certificates
	I0721 23:50:39.359262   29454 provision.go:84] configureAuth start
	I0721 23:50:39.359272   29454 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:50:39.359510   29454 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:50:39.361909   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.362240   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.362268   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.362379   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.364761   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.365197   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.365219   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.365408   29454 provision.go:143] copyHostCerts
	I0721 23:50:39.365451   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:50:39.365500   29454 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0721 23:50:39.365513   29454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:50:39.365594   29454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:50:39.365718   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:50:39.365745   29454 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0721 23:50:39.365754   29454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:50:39.365799   29454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:50:39.365872   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:50:39.365894   29454 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0721 23:50:39.365900   29454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:50:39.365936   29454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:50:39.366016   29454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.ha-564251 san=[127.0.0.1 192.168.39.91 ha-564251 localhost minikube]
	I0721 23:50:39.434031   29454 provision.go:177] copyRemoteCerts
	I0721 23:50:39.434097   29454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:50:39.434119   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.436867   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.437340   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.437359   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.437538   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.437709   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.437873   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.438006   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:50:39.516887   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0721 23:50:39.516967   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:50:39.540045   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0721 23:50:39.540123   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0721 23:50:39.562573   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0721 23:50:39.562706   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 23:50:39.585694   29454 provision.go:87] duration metric: took 226.419199ms to configureAuth
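
The server certificate regenerated at provision.go:117 above carries the SANs [127.0.0.1 192.168.39.91 ha-564251 localhost minikube] and is copied to /etc/docker/server.pem on the guest. A hypothetical spot-check of those SANs (assumes SSH access through the profile and openssl present in the guest image):

	out/minikube-linux-amd64 -p ha-564251 ssh -- \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"
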
	I0721 23:50:39.585727   29454 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:50:39.586022   29454 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:50:39.586125   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.588607   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.589054   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.589094   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.589249   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.589430   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.589564   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.589723   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.589848   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:39.590007   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:50:39.590023   29454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:52:10.308433   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:52:10.308467   29454 machine.go:97] duration metric: took 1m31.270382022s to provisionDockerMachine
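
Almost all of that 1m31s sits between the SSH command issued at 23:50:39 and its return at 23:52:10, i.e. in the `sudo systemctl restart crio` appended to the sysconfig write above. A hypothetical way to see what crio was doing in that window (assumes SSH access to the guest and that journald retains this boot's logs):

	out/minikube-linux-amd64 -p ha-564251 ssh -- \
	  'sudo journalctl -u crio --no-pager --since "2024-07-21 23:50:39" --until "2024-07-21 23:52:11" | tail -n 20'
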
	I0721 23:52:10.308484   29454 start.go:293] postStartSetup for "ha-564251" (driver="kvm2")
	I0721 23:52:10.308499   29454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:52:10.308533   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.308974   29454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:52:10.309004   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.311870   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.312338   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.312367   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.312457   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.312631   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.312781   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.312929   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:52:10.394011   29454 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:52:10.397975   29454 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:52:10.398007   29454 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:52:10.398081   29454 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:52:10.398184   29454 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0721 23:52:10.398197   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0721 23:52:10.398279   29454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 23:52:10.407477   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:52:10.429728   29454 start.go:296] duration metric: took 121.230654ms for postStartSetup
	I0721 23:52:10.429766   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.430047   29454 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0721 23:52:10.430078   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.432773   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.433170   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.433194   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.433455   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.433634   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.433793   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.433986   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	W0721 23:52:10.512818   29454 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0721 23:52:10.512851   29454 fix.go:56] duration metric: took 1m31.495683363s for fixHost
	I0721 23:52:10.512876   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.515634   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.516025   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.516047   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.516212   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.516456   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.516602   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.516748   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.516928   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:52:10.517088   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:52:10.517099   29454 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:52:10.615017   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605930.576237422
	
	I0721 23:52:10.615038   29454 fix.go:216] guest clock: 1721605930.576237422
	I0721 23:52:10.615048   29454 fix.go:229] Guest: 2024-07-21 23:52:10.576237422 +0000 UTC Remote: 2024-07-21 23:52:10.512858408 +0000 UTC m=+91.621596507 (delta=63.379014ms)
	I0721 23:52:10.615090   29454 fix.go:200] guest clock delta is within tolerance: 63.379014ms
	I0721 23:52:10.615098   29454 start.go:83] releasing machines lock for "ha-564251", held for 1m31.597943082s
	I0721 23:52:10.615129   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.615404   29454 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:52:10.617866   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.618197   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.618223   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.618380   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.618900   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.619070   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.619171   29454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:52:10.619236   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.619261   29454 ssh_runner.go:195] Run: cat /version.json
	I0721 23:52:10.619285   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.621835   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.622018   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.622227   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.622251   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.622395   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.622488   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.622523   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.622526   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.622711   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.622744   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.622904   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.622898   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:52:10.623062   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.623208   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:52:10.695361   29454 ssh_runner.go:195] Run: systemctl --version
	I0721 23:52:10.722582   29454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:52:10.885900   29454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:52:10.891576   29454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:52:10.891669   29454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:52:10.900250   29454 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
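The find above renames any stray bridge/podman CNI configs with a .mk_disabled suffix, so only the CNI minikube manages (kindnet, per the multinode detection further down) stays active. A runnable, properly quoted form of the same command, assuming GNU find on the node:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;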
	I0721 23:52:10.900270   29454 start.go:495] detecting cgroup driver to use...
	I0721 23:52:10.900345   29454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:52:10.915907   29454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:52:10.929949   29454 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:52:10.930013   29454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:52:10.943291   29454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:52:10.956467   29454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:52:11.106012   29454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:52:11.257298   29454 docker.go:233] disabling docker service ...
	I0721 23:52:11.257368   29454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:52:11.272246   29454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:52:11.284868   29454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:52:11.428586   29454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:52:11.569523   29454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
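Before handing the node to CRI-O, the runs above stop, disable, and mask the cri-docker and docker units: disabling the socket prevents socket activation from resurrecting the service, and masking links the unit to /dev/null so nothing can start it. A minimal sketch of the docker half, assuming the standard systemd unit names:

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket   # no more socket activation
    sudo systemctl mask docker.service     # unit now points at /dev/null
    systemctl is-active docker             # expect "inactive" (non-zero exit)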
	I0721 23:52:11.582551   29454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:52:11.600786   29454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:52:11.600840   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.610507   29454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:52:11.610563   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.619982   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.629360   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.638694   29454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:52:11.648202   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.657359   29454 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.667805   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.677216   29454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:52:11.685892   29454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:52:11.694393   29454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:52:11.834530   29454 ssh_runner.go:195] Run: sudo systemctl restart crio
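The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to cgroupfs, force conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0; the daemon-reload plus crio restart then applies it all. A condensed replay of those edits (same file and keys as in the log):

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    sudo grep -q '^ *default_sysctls' "$conf" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$conf"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    sudo systemctl daemon-reload && sudo systemctl restart crio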
	I0721 23:52:12.090295   29454 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:52:12.090357   29454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:52:12.096587   29454 start.go:563] Will wait 60s for crictl version
	I0721 23:52:12.096635   29454 ssh_runner.go:195] Run: which crictl
	I0721 23:52:12.100166   29454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:52:12.133969   29454 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0721 23:52:12.134042   29454 ssh_runner.go:195] Run: crio --version
	I0721 23:52:12.161370   29454 ssh_runner.go:195] Run: crio --version
	I0721 23:52:12.190068   29454 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:52:12.191187   29454 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:52:12.193888   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:12.194234   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:12.194260   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:12.194460   29454 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:52:12.199314   29454 kubeadm.go:883] updating cluster {Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.226 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0721 23:52:12.199448   29454 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:52:12.199487   29454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:52:12.240466   29454 crio.go:514] all images are preloaded for cri-o runtime.
	I0721 23:52:12.240488   29454 crio.go:433] Images already preloaded, skipping extraction
	I0721 23:52:12.240541   29454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:52:12.275341   29454 crio.go:514] all images are preloaded for cri-o runtime.
	I0721 23:52:12.275366   29454 cache_images.go:84] Images are preloaded, skipping loading
	I0721 23:52:12.275376   29454 kubeadm.go:934] updating node { 192.168.39.91 8443 v1.30.3 crio true true} ...
	I0721 23:52:12.275517   29454 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-564251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
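The empty ExecStart= followed by a second ExecStart= in the unit above is the standard systemd drop-in idiom: the first line clears the command list inherited from the base kubelet.service so the drop-in (installed below as 10-kubeadm.conf) can substitute its own kubelet invocation with the node-specific --hostname-override and --node-ip flags. To inspect the merged result on the node:

    systemctl cat kubelet | grep -n 'ExecStart'
    # expect an empty ExecStart= (reset) followed by the full command line above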
	I0721 23:52:12.275615   29454 ssh_runner.go:195] Run: crio config
	I0721 23:52:12.319962   29454 cni.go:84] Creating CNI manager for ""
	I0721 23:52:12.319982   29454 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0721 23:52:12.319993   29454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 23:52:12.320017   29454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-564251 NodeName:ha-564251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 23:52:12.320138   29454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-564251"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
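The generated file is four YAML documents in one (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is copied below to /var/tmp/minikube/kubeadm.yaml.new. As a hedged sanity check, recent kubeadm releases ship a validate subcommand that can be pointed at such a file; whether it is present depends on the kubeadm build:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new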
	
	I0721 23:52:12.320159   29454 kube-vip.go:115] generating kube-vip config ...
	I0721 23:52:12.320202   29454 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0721 23:52:12.331323   29454 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0721 23:52:12.331438   29454 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
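This manifest is written below to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod; it ARPs the HA VIP 192.168.39.254 on eth0 and, with lb_enable set, load-balances API traffic on port 8443 across the control planes. A quick hedged check once it is running (run on any node; device and endpoint taken from the config above):

    ip -4 addr show dev eth0 | grep 192.168.39.254     # current leader holds the VIP
    curl -sk https://192.168.39.254:8443/healthz; echo  # expect "ok" from the API server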
	I0721 23:52:12.331544   29454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:52:12.340623   29454 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 23:52:12.340681   29454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0721 23:52:12.349409   29454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0721 23:52:12.364290   29454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:52:12.379049   29454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0721 23:52:12.394221   29454 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0721 23:52:12.411340   29454 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0721 23:52:12.414834   29454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:52:12.561811   29454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:52:12.575683   29454 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251 for IP: 192.168.39.91
	I0721 23:52:12.575706   29454 certs.go:194] generating shared ca certs ...
	I0721 23:52:12.575723   29454 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:52:12.575856   29454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:52:12.575896   29454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:52:12.575905   29454 certs.go:256] generating profile certs ...
	I0721 23:52:12.575982   29454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key
	I0721 23:52:12.576008   29454 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.1f51e579
	I0721 23:52:12.576024   29454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.1f51e579 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91 192.168.39.202 192.168.39.89 192.168.39.254]
	I0721 23:52:12.630221   29454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.1f51e579 ...
	I0721 23:52:12.630251   29454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.1f51e579: {Name:mkc6e7a1da999f35092b2f3a848bc5ca259ba541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:52:12.630422   29454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.1f51e579 ...
	I0721 23:52:12.630433   29454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.1f51e579: {Name:mk9f2b972ea584c33e0797517e5cb49f297bf5d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:52:12.630496   29454 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.1f51e579 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt
	I0721 23:52:12.630687   29454 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.1f51e579 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key
	I0721 23:52:12.630836   29454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key
	I0721 23:52:12.630850   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0721 23:52:12.630864   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0721 23:52:12.630876   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0721 23:52:12.630889   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0721 23:52:12.630900   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0721 23:52:12.630920   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0721 23:52:12.630934   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0721 23:52:12.630945   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0721 23:52:12.630997   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0721 23:52:12.631026   29454 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0721 23:52:12.631035   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:52:12.631054   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:52:12.631077   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:52:12.631099   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:52:12.631140   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:52:12.631172   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:52:12.631185   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0721 23:52:12.631196   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0721 23:52:12.631756   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:52:12.655133   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:52:12.676253   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:52:12.697396   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:52:12.721543   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0721 23:52:12.748027   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0721 23:52:12.771436   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:52:12.794513   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:52:12.817623   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:52:12.841967   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0721 23:52:12.864821   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0721 23:52:12.887780   29454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 23:52:12.904930   29454 ssh_runner.go:195] Run: openssl version
	I0721 23:52:12.910424   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0721 23:52:12.921655   29454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0721 23:52:12.925908   29454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0721 23:52:12.925958   29454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0721 23:52:12.931321   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0721 23:52:12.941183   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0721 23:52:12.951339   29454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0721 23:52:12.955446   29454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0721 23:52:12.955488   29454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0721 23:52:12.960792   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 23:52:12.971723   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:52:12.983760   29454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:52:12.988092   29454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:52:12.988135   29454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:52:12.994233   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
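The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: TLS clients locate a CA in /etc/ssl/certs by hashing its subject, so each installed PEM needs a matching <hash>.0 link. How such a link is derived, using the same tools as the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here h is b5213941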
	I0721 23:52:13.005385   29454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:52:13.009859   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0721 23:52:13.015519   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0721 23:52:13.022393   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0721 23:52:13.027778   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0721 23:52:13.033299   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0721 23:52:13.038740   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
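The -checkend 86400 flag makes openssl exit non-zero when a certificate expires within the next 86400 seconds (24 hours); minikube uses it above to decide whether the existing control-plane certs can be reused on restart. The same check as a loop over those files:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        || echo "renew: $c"
    done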
	I0721 23:52:13.044736   29454 kubeadm.go:392] StartCluster: {Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.226 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:52:13.044834   29454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0721 23:52:13.044876   29454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0721 23:52:13.088687   29454 cri.go:89] found id: "89eb1b99a458a2c828e88677b3a08c634ba9196fb4f1877edd74daf5fc76e5c9"
	I0721 23:52:13.088706   29454 cri.go:89] found id: "56b1813870485276ccb9eb6e72a676270185dc25ccfb41c3370823d1f4ee463e"
	I0721 23:52:13.088712   29454 cri.go:89] found id: "118160f8dc93973f5b5a80cbbf84ece3aa0be9f31f5000979b1fc88a2ac1b77b"
	I0721 23:52:13.088715   29454 cri.go:89] found id: "fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6"
	I0721 23:52:13.088719   29454 cri.go:89] found id: "db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0"
	I0721 23:52:13.088723   29454 cri.go:89] found id: "d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091"
	I0721 23:52:13.088727   29454 cri.go:89] found id: "b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5"
	I0721 23:52:13.088731   29454 cri.go:89] found id: "777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5"
	I0721 23:52:13.088734   29454 cri.go:89] found id: "bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007"
	I0721 23:52:13.088741   29454 cri.go:89] found id: "22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624"
	I0721 23:52:13.088746   29454 cri.go:89] found id: "fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed"
	I0721 23:52:13.088764   29454 cri.go:89] found id: "17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3"
	I0721 23:52:13.088769   29454 cri.go:89] found id: "9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7"
	I0721 23:52:13.088772   29454 cri.go:89] found id: ""
	I0721 23:52:13.088818   29454 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.127106887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fdcb6e0-6038-4022-829e-5f60fe80849f name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.127513266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e796a289f9b2fa5270c7a3b8fdaedbb1e2c7d7c5ff6857acbe442bd279ed525c,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721606019021817241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605979031396334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605979013588334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab812ef75a5856ea95837958537043b8d5cbf8c1c8ac59d11fe7b1898a896642,PodSandboxId:d0c1dee700092e75853ef153c0857c73f4c0050f95b23ccdd79341ed5a07468a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605971245533401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721605968015036748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b882287a09c86a6f5b774e9aa62305468184cba99a022f16f6da77f5224e011,PodSandboxId:988f31d520e5433192a4c8a6d0bbaed242a9c06160cf20b1a7ed68fd4d916070,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605953771523204,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d9b939d43bfee37ee200502ef531e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df,PodSandboxId:e31ffc81cb2deec490c88dfc5b08f48f4a116771fcb67719e806a030f4dc85f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605938169647767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e,PodSandboxId:49f01a734201f5e3e00ddfc8ac1ebad79c1207a87b26a19efb4521262c401546,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605938180127336,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343ecaacc
ece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5,PodSandboxId:9abc6400cbe83a330fe0c2a79addf4b9a2d9d0f5ff0060b5328e4dfc5548a065,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938058796009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52,PodSandboxId:0ece516c622933387722519cfb58e436078402858bbfd4e8d0fac4a8c3881f1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938039081670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f,PodSandboxId:9990219a19be206c35693e9953355b554ab87d2bbd74a4b713a2b26a103569ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605937884231388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99,PodSandboxId:7eafbd4df408b6083e8b375ee692885b994755fbb403ca5f5bf99804a2b596c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605937838978855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950
c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721605937832541801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721605937751410351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Anno
tations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721605438091344795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annota
tions:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306950278225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kuber
netes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306869203635,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721605295239678893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721605291575674051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721605270816096480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721605270662197546,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fdcb6e0-6038-4022-829e-5f60fe80849f name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.186372274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b9947b6-6ba5-4e38-9d6a-1fdae8643bf8 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.186475229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b9947b6-6ba5-4e38-9d6a-1fdae8643bf8 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.187923453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5d01c7a-3e51-4bc9-8f98-16601b89f58e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.188649572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721606075188408414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5d01c7a-3e51-4bc9-8f98-16601b89f58e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.189788599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50372007-1dcd-45bb-8989-362c876e0386 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.189896300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50372007-1dcd-45bb-8989-362c876e0386 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.196399450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e796a289f9b2fa5270c7a3b8fdaedbb1e2c7d7c5ff6857acbe442bd279ed525c,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721606019021817241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605979031396334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605979013588334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab812ef75a5856ea95837958537043b8d5cbf8c1c8ac59d11fe7b1898a896642,PodSandboxId:d0c1dee700092e75853ef153c0857c73f4c0050f95b23ccdd79341ed5a07468a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605971245533401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721605968015036748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b882287a09c86a6f5b774e9aa62305468184cba99a022f16f6da77f5224e011,PodSandboxId:988f31d520e5433192a4c8a6d0bbaed242a9c06160cf20b1a7ed68fd4d916070,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605953771523204,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d9b939d43bfee37ee200502ef531e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df,PodSandboxId:e31ffc81cb2deec490c88dfc5b08f48f4a116771fcb67719e806a030f4dc85f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605938169647767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e,PodSandboxId:49f01a734201f5e3e00ddfc8ac1ebad79c1207a87b26a19efb4521262c401546,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605938180127336,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343ecaacc
ece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5,PodSandboxId:9abc6400cbe83a330fe0c2a79addf4b9a2d9d0f5ff0060b5328e4dfc5548a065,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938058796009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52,PodSandboxId:0ece516c622933387722519cfb58e436078402858bbfd4e8d0fac4a8c3881f1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938039081670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f,PodSandboxId:9990219a19be206c35693e9953355b554ab87d2bbd74a4b713a2b26a103569ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605937884231388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99,PodSandboxId:7eafbd4df408b6083e8b375ee692885b994755fbb403ca5f5bf99804a2b596c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605937838978855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950
c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721605937832541801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721605937751410351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Anno
tations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721605438091344795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annota
tions:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306950278225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kuber
netes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306869203635,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721605295239678893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721605291575674051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721605270816096480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721605270662197546,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50372007-1dcd-45bb-8989-362c876e0386 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.247992577Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=535006f2-2845-460d-930e-8906036be444 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.248090840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=535006f2-2845-460d-930e-8906036be444 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.249332260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=345c1751-fd87-4038-b121-dcbd171b1d5c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.249989500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721606075249958155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=345c1751-fd87-4038-b121-dcbd171b1d5c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.250627582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62b73d86-e1fd-48e5-a37b-494dcfd0e969 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.250706939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62b73d86-e1fd-48e5-a37b-494dcfd0e969 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.251532602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e796a289f9b2fa5270c7a3b8fdaedbb1e2c7d7c5ff6857acbe442bd279ed525c,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721606019021817241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605979031396334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605979013588334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab812ef75a5856ea95837958537043b8d5cbf8c1c8ac59d11fe7b1898a896642,PodSandboxId:d0c1dee700092e75853ef153c0857c73f4c0050f95b23ccdd79341ed5a07468a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605971245533401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721605968015036748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b882287a09c86a6f5b774e9aa62305468184cba99a022f16f6da77f5224e011,PodSandboxId:988f31d520e5433192a4c8a6d0bbaed242a9c06160cf20b1a7ed68fd4d916070,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605953771523204,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d9b939d43bfee37ee200502ef531e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df,PodSandboxId:e31ffc81cb2deec490c88dfc5b08f48f4a116771fcb67719e806a030f4dc85f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605938169647767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e,PodSandboxId:49f01a734201f5e3e00ddfc8ac1ebad79c1207a87b26a19efb4521262c401546,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605938180127336,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343ecaacc
ece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5,PodSandboxId:9abc6400cbe83a330fe0c2a79addf4b9a2d9d0f5ff0060b5328e4dfc5548a065,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938058796009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52,PodSandboxId:0ece516c622933387722519cfb58e436078402858bbfd4e8d0fac4a8c3881f1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938039081670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f,PodSandboxId:9990219a19be206c35693e9953355b554ab87d2bbd74a4b713a2b26a103569ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605937884231388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99,PodSandboxId:7eafbd4df408b6083e8b375ee692885b994755fbb403ca5f5bf99804a2b596c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605937838978855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950
c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721605937832541801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721605937751410351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Anno
tations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721605438091344795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annota
tions:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306950278225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kuber
netes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306869203635,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721605295239678893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721605291575674051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721605270816096480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721605270662197546,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62b73d86-e1fd-48e5-a37b-494dcfd0e969 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.273925185Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=dc835db0-4a87-4060-8286-6ed46c6a1096 name=/runtime.v1.RuntimeService/Status
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.274009662Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=dc835db0-4a87-4060-8286-6ed46c6a1096 name=/runtime.v1.RuntimeService/Status
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.304825440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=528dcf9e-81bf-4ce6-ba77-dbd6478c10d7 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.304908886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=528dcf9e-81bf-4ce6-ba77-dbd6478c10d7 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.306423528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdc731dc-9950-4e1a-a33f-03295e426a79 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.306898504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721606075306876112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdc731dc-9950-4e1a-a33f-03295e426a79 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.307376790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2d03df8-417d-48d7-8e56-259c7efeb6ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.307434353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2d03df8-417d-48d7-8e56-259c7efeb6ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:54:35 ha-564251 crio[3684]: time="2024-07-21 23:54:35.308038656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e796a289f9b2fa5270c7a3b8fdaedbb1e2c7d7c5ff6857acbe442bd279ed525c,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721606019021817241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605979031396334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605979013588334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab812ef75a5856ea95837958537043b8d5cbf8c1c8ac59d11fe7b1898a896642,PodSandboxId:d0c1dee700092e75853ef153c0857c73f4c0050f95b23ccdd79341ed5a07468a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605971245533401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721605968015036748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b882287a09c86a6f5b774e9aa62305468184cba99a022f16f6da77f5224e011,PodSandboxId:988f31d520e5433192a4c8a6d0bbaed242a9c06160cf20b1a7ed68fd4d916070,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605953771523204,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d9b939d43bfee37ee200502ef531e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df,PodSandboxId:e31ffc81cb2deec490c88dfc5b08f48f4a116771fcb67719e806a030f4dc85f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605938169647767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e,PodSandboxId:49f01a734201f5e3e00ddfc8ac1ebad79c1207a87b26a19efb4521262c401546,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605938180127336,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343ecaacc
ece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5,PodSandboxId:9abc6400cbe83a330fe0c2a79addf4b9a2d9d0f5ff0060b5328e4dfc5548a065,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938058796009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52,PodSandboxId:0ece516c622933387722519cfb58e436078402858bbfd4e8d0fac4a8c3881f1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938039081670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f,PodSandboxId:9990219a19be206c35693e9953355b554ab87d2bbd74a4b713a2b26a103569ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605937884231388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99,PodSandboxId:7eafbd4df408b6083e8b375ee692885b994755fbb403ca5f5bf99804a2b596c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605937838978855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950
c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721605937832541801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721605937751410351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Anno
tations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721605438091344795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annota
tions:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306950278225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kuber
netes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306869203635,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721605295239678893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721605291575674051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721605270816096480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721605270662197546,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2d03df8-417d-48d7-8e56-259c7efeb6ff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e796a289f9b2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       4                   b028ef860c1d6       storage-provisioner
	a6de206bc3bc8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   dc671cbc7a7cb       kube-controller-manager-ha-564251
	898386faaca7f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   89e61b0cd0f95       kube-apiserver-ha-564251
	ab812ef75a585       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   d0c1dee700092       busybox-fc5497c4f-tvjh7
	21a08c9335f49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   b028ef860c1d6       storage-provisioner
	7b882287a09c8       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   988f31d520e54       kube-vip-ha-564251
	e68ac889a48af       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   49f01a734201f       kindnet-jz5md
	a199590ae4534       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   e31ffc81cb2de       kube-proxy-srpl8
	343ecaaccece7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   9abc6400cbe83       coredns-7db6d8ff4d-bsbzk
	a10b7adf0b1d4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0ece516c62293       coredns-7db6d8ff4d-f4lqn
	38f8d5dac7577       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   9990219a19be2       etcd-ha-564251
	8944ced8f719b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   7eafbd4df408b       kube-scheduler-ha-564251
	7f230e8efe835       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   89e61b0cd0f95       kube-apiserver-ha-564251
	0effec1b1aa8c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   dc671cbc7a7cb       kube-controller-manager-ha-564251
	3769ca1c0d189       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4399dac80b572       busybox-fc5497c4f-tvjh7
	fd88a6f6b66dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   60549b9fc09ba       coredns-7db6d8ff4d-bsbzk
	d708ea287a4e1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   3cf5796c9ffab       coredns-7db6d8ff4d-f4lqn
	b2afbf6c4dfa0       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   8c7a9ed52b5b4       kindnet-jz5md
	777c36438bf0f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   997932c064fbe       kube-proxy-srpl8
	22bd5cac142d6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   2d4165e9b2df2       kube-scheduler-ha-564251
	9863a1f5cf334       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   bc6861a50f8f6       etcd-ha-564251
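	A listing like the one above can be reproduced on the machine itself via crictl (a sketch; it assumes the default CRI-O socket at /var/run/crio/crio.sock and that the ha-564251 profile is still running):

	  minikube -p ha-564251 ssh -- sudo crictl ps -a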
	
	
	==> coredns [343ecaaccece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5] <==
	Trace[1161392864]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:52:36.683)
	Trace[1161392864]: [10.001414785s] [10.001414785s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1808281943]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Jul-2024 23:52:26.746) (total time: 10000ms):
	Trace[1808281943]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (23:52:36.747)
	Trace[1808281943]: [10.000995033s] [10.000995033s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49986->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49986->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[113173568]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Jul-2024 23:52:27.174) (total time: 10001ms):
	Trace[113173568]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:52:37.175)
	Trace[113173568]: [10.00151339s] [10.00151339s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57778->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57778->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091] <==
	[INFO] 10.244.1.2:34188 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014651s
	[INFO] 10.244.1.2:41501 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011577s
	[INFO] 10.244.1.2:34022 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084216s
	[INFO] 10.244.2.2:36668 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118928s
	[INFO] 10.244.0.4:60553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129219s
	[INFO] 10.244.0.4:34229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158514s
	[INFO] 10.244.0.4:35099 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013345s
	[INFO] 10.244.1.2:60128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204062s
	[INFO] 10.244.1.2:51220 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169537s
	[INFO] 10.244.1.2:50118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000261213s
	[INFO] 10.244.2.2:42616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012241s
	[INFO] 10.244.2.2:51984 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223089s
	[INFO] 10.244.2.2:60866 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100348s
	[INFO] 10.244.0.4:38494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093863s
	[INFO] 10.244.0.4:56964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080856s
	[INFO] 10.244.0.4:37413 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172185s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1909&timeout=8m53s&timeoutSeconds=533&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1909&timeout=5m38s&timeoutSeconds=338&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6] <==
	[INFO] 10.244.1.2:47400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171001s
	[INFO] 10.244.1.2:51399 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162839s
	[INFO] 10.244.2.2:46920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139973s
	[INFO] 10.244.2.2:45334 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001092856s
	[INFO] 10.244.0.4:53396 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109772s
	[INFO] 10.244.0.4:54634 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001652249s
	[INFO] 10.244.0.4:45490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147442s
	[INFO] 10.244.0.4:46915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090743s
	[INFO] 10.244.0.4:60906 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127948s
	[INFO] 10.244.0.4:36593 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118548s
	[INFO] 10.244.1.2:59477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105785s
	[INFO] 10.244.2.2:48044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138738s
	[INFO] 10.244.2.2:48209 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093024s
	[INFO] 10.244.2.2:54967 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089783s
	[INFO] 10.244.0.4:47425 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088831s
	[INFO] 10.244.1.2:59455 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131678s
	[INFO] 10.244.2.2:60606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089108s
	[INFO] 10.244.0.4:46173 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097876s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1909&timeout=8m19s&timeoutSeconds=499&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
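	The coredns output quoted in these blocks can be pulled straight from the cluster; as a sketch (assuming the kubeconfig context matches the ha-564251 profile name, and using --previous for the exited attempt-0 containers):

	  kubectl --context ha-564251 -n kube-system logs coredns-7db6d8ff4d-f4lqn --previous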
	
	
	==> describe nodes <==
	Name:               ha-564251
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T23_41_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:53:04 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:53:04 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:53:04 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:53:04 +0000   Sun, 21 Jul 2024 23:41:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    ha-564251
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 83877339e2d74557b5e6d75fd0a30c5b
	  System UUID:                83877339-e2d7-4557-b5e6-d75fd0a30c5b
	  Boot ID:                    4d4acbc6-fdf1-4a14-b622-8bad377224dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tvjh7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-bsbzk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-f4lqn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-564251                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-jz5md                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-564251             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-564251    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-srpl8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-564251             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-564251                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 91s                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-564251 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-564251 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-564251 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                kubelet          Node ha-564251 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node ha-564251 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node ha-564251 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal   NodeReady                12m                kubelet          Node ha-564251 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Warning  ContainerGCFailed        3m15s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           81s                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal   RegisteredNode           80s                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal   RegisteredNode           26s                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
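	The per-node sections in this dump are standard kubectl describe output; to regenerate this view (a sketch, assuming the kubeconfig context shares the profile name):

	  kubectl --context ha-564251 describe node ha-564251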
	
	
	Name:               ha-564251-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_42_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:42:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:54:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:53:33 +0000   Sun, 21 Jul 2024 23:53:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:53:33 +0000   Sun, 21 Jul 2024 23:53:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:53:33 +0000   Sun, 21 Jul 2024 23:53:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:53:33 +0000   Sun, 21 Jul 2024 23:53:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-564251-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8db54debc3f459a84145497caff8bc1
	  System UUID:                e8db54de-bc3f-459a-8414-5497caff8bc1
	  Boot ID:                    06f34f0d-9e5a-4914-968f-a7b4b9481516
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2jrmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-564251-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-99b2q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-564251-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-564251-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8c6vn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-564251-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-564251-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 74s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-564251-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-564251-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-564251-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  NodeNotReady             8m45s                node-controller  Node ha-564251-m02 status is now: NodeNotReady
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node ha-564251-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node ha-564251-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node ha-564251-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           80s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           26s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	
	
	Name:               ha-564251-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_43_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:43:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:54:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:54:11 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:54:11 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:54:11 +0000   Sun, 21 Jul 2024 23:43:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:54:11 +0000   Sun, 21 Jul 2024 23:43:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-564251-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 edaed2175ae2489883b557af269e9263
	  System UUID:                edaed217-5ae2-4898-83b5-57af269e9263
	  Boot ID:                    ed5730ed-08c8-4771-987a-1bf361ffbcc9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s2cqd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-564251-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-s2t8k                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-564251-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-564251-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-2xlks                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-564251-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-564251-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-564251-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-564251-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-564251-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal   RegisteredNode           81s                node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal   RegisteredNode           80s                node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  54s                kubelet          Node ha-564251-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s                kubelet          Node ha-564251-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s                kubelet          Node ha-564251-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 54s                kubelet          Node ha-564251-m03 has been rebooted, boot id: ed5730ed-08c8-4771-987a-1bf361ffbcc9
	  Normal   RegisteredNode           26s                node-controller  Node ha-564251-m03 event: Registered Node ha-564251-m03 in Controller
	
	
	Name:               ha-564251-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_44_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:44:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:54:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:54:27 +0000   Sun, 21 Jul 2024 23:54:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:54:27 +0000   Sun, 21 Jul 2024 23:54:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:54:27 +0000   Sun, 21 Jul 2024 23:54:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:54:27 +0000   Sun, 21 Jul 2024 23:54:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    ha-564251-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf784ac43fb240a1b428a7ebf8ca34bc
	  System UUID:                cf784ac4-3fb2-40a1-b428-a7ebf8ca34bc
	  Boot ID:                    cafa4aaa-1679-45a7-8af0-acf5d1fb4d0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6mfjp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-lv5zw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 9m59s              kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-564251-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-564251-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-564251-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   NodeReady                9m44s              kubelet          Node ha-564251-m04 status is now: NodeReady
	  Normal   RegisteredNode           81s                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   RegisteredNode           80s                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   NodeNotReady             41s                node-controller  Node ha-564251-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           26s                node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-564251-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-564251-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-564251-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-564251-m04 has been rebooted, boot id: cafa4aaa-1679-45a7-8af0-acf5d1fb4d0b
	  Normal   NodeReady                8s                 kubelet          Node ha-564251-m04 status is now: NodeReady
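
	The four node descriptions above are the standard kubectl-describe view of the HA cluster. The same Ready/NotReady state can be read programmatically; below is a minimal client-go sketch, assuming client-go v0.30.x (to match the v1.30.3 cluster shown above) and a kubeconfig at the default path. It is illustrative, not part of the test harness.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (the path is an assumption, not taken from the report).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Print only the Ready condition, the one the NodeNotReady events flip.
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s\tReady=%s\t(%s)\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}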
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul21 23:41] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.053909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055459] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.166215] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.145388] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268301] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.918090] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.419554] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.062251] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.216979] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075586] kauditd_printk_skb: 79 callbacks suppressed
	[ +11.003747] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.099946] kauditd_printk_skb: 34 callbacks suppressed
	[Jul21 23:42] kauditd_printk_skb: 26 callbacks suppressed
	[Jul21 23:52] systemd-fstab-generator[3604]: Ignoring "noauto" option for root device
	[  +0.150437] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +0.175334] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
	[  +0.144219] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +0.262608] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.725195] systemd-fstab-generator[3772]: Ignoring "noauto" option for root device
	[  +4.987058] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.091585] kauditd_printk_skb: 85 callbacks suppressed
	[ +36.550376] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f] <==
	{"level":"warn","ts":"2024-07-21T23:53:35.326649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:53:35.331511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3a19c1a50e8a825c","from":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-21T23:53:35.457796Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.89:2380/version","remote-member-id":"168ed4a7c6431682","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:35.457852Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"168ed4a7c6431682","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:38.835623Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"168ed4a7c6431682","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:38.835801Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"168ed4a7c6431682","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:39.45932Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.89:2380/version","remote-member-id":"168ed4a7c6431682","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:39.459376Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"168ed4a7c6431682","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:43.461246Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.89:2380/version","remote-member-id":"168ed4a7c6431682","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:43.461392Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"168ed4a7c6431682","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:43.836722Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"168ed4a7c6431682","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:43.836852Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"168ed4a7c6431682","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:47.463079Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.89:2380/version","remote-member-id":"168ed4a7c6431682","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:47.463134Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"168ed4a7c6431682","error":"Get \"https://192.168.39.89:2380/version\": dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:48.83793Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"168ed4a7c6431682","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-21T23:53:48.838107Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"168ed4a7c6431682","rtt":"0s","error":"dial tcp 192.168.39.89:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-21T23:53:50.812901Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:53:50.813077Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:53:50.813988Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:53:50.824496Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3a19c1a50e8a825c","to":"168ed4a7c6431682","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-21T23:53:50.824631Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:53:50.84771Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3a19c1a50e8a825c","to":"168ed4a7c6431682","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-21T23:53:50.847816Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:05.883913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.806023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-21T23:54:05.884109Z","caller":"traceutil/trace.go:171","msg":"trace[401042962] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:2397; }","duration":"127.055922ms","start":"2024-07-21T23:54:05.757019Z","end":"2024-07-21T23:54:05.884075Z","steps":["trace[401042962] 'count revisions from in-memory index tree'  (duration: 125.872969ms)"],"step_count":1}
	
	
	==> etcd [9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7] <==
	2024/07/21 23:50:39 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/21 23:50:39 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-21T23:50:39.707492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.616579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-21T23:50:39.707541Z","caller":"traceutil/trace.go:171","msg":"trace[242247877] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"165.695098ms","start":"2024-07-21T23:50:39.541814Z","end":"2024-07-21T23:50:39.707509Z","steps":["trace[242247877] 'agreement among raft nodes before linearized reading'  (duration: 165.637008ms)"],"step_count":1}
	2024/07/21 23:50:39 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-21T23:50:39.846465Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.91:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-21T23:50:39.846661Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.91:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-21T23:50:39.846792Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3a19c1a50e8a825c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-21T23:50:39.847018Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847065Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.84709Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847196Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847314Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847466Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847509Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847625Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.847668Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.847763Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.847923Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.848014Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.848087Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.848101Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.851698Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.91:2380"}
	{"level":"info","ts":"2024-07-21T23:50:39.851844Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.91:2380"}
	{"level":"info","ts":"2024-07-21T23:50:39.851876Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-564251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.91:2380"],"advertise-client-urls":["https://192.168.39.91:2379"]}
	
	
	==> kernel <==
	 23:54:36 up 13 min,  0 users,  load average: 0.37, 0.45, 0.28
	Linux ha-564251 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5] <==
	I0721 23:50:16.151634       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:50:16.151641       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:50:16.151860       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:50:16.151883       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:50:16.151948       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:50:16.151968       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:50:26.157449       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:50:26.157491       1 main.go:299] handling current node
	I0721 23:50:26.157504       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:50:26.157509       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:50:26.157728       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:50:26.157750       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:50:26.157811       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:50:26.157826       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	E0721 23:50:28.864826       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1890&timeout=7m16s&timeoutSeconds=436&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0721 23:50:36.151019       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:50:36.151076       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:50:36.151228       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:50:36.151248       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:50:36.151328       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:50:36.151347       1 main.go:299] handling current node
	I0721 23:50:36.151359       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:50:36.151364       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	W0721 23:50:38.688200       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0721 23:50:38.688651       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kindnet [e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e] <==
	I0721 23:53:59.157904       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:54:09.162049       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:54:09.162173       1 main.go:299] handling current node
	I0721 23:54:09.162208       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:54:09.162233       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:54:09.162404       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:54:09.162469       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:54:09.162663       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:54:09.162731       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:54:19.156461       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:54:19.156512       1 main.go:299] handling current node
	I0721 23:54:19.156538       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:54:19.156545       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:54:19.156758       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:54:19.156790       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:54:19.156870       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:54:19.156876       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:54:29.156584       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:54:29.156626       1 main.go:299] handling current node
	I0721 23:54:29.156642       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:54:29.156652       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:54:29.156813       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:54:29.156834       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:54:29.156929       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:54:29.156949       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146] <==
	I0721 23:52:18.649439       1 options.go:221] external host was not specified, using 192.168.39.91
	I0721 23:52:18.666065       1 server.go:148] Version: v1.30.3
	I0721 23:52:18.666110       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:52:18.996220       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0721 23:52:19.004685       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0721 23:52:19.011462       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0721 23:52:19.011501       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0721 23:52:19.013787       1 instance.go:299] Using reconciler: lease
	W0721 23:52:38.993828       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0721 23:52:38.995843       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0721 23:52:39.014645       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a] <==
	I0721 23:53:01.129364       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0721 23:53:01.093532       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0721 23:53:01.191292       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0721 23:53:01.191323       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0721 23:53:01.194012       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0721 23:53:01.194206       1 shared_informer.go:320] Caches are synced for configmaps
	I0721 23:53:01.201817       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0721 23:53:01.201973       1 aggregator.go:165] initial CRD sync complete...
	I0721 23:53:01.202071       1 autoregister_controller.go:141] Starting autoregister controller
	I0721 23:53:01.202096       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0721 23:53:01.202119       1 cache.go:39] Caches are synced for autoregister controller
	W0721 23:53:01.206738       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.89]
	I0721 23:53:01.248825       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0721 23:53:01.248971       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0721 23:53:01.249013       1 policy_source.go:224] refreshing policies
	I0721 23:53:01.289715       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0721 23:53:01.294214       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0721 23:53:01.295242       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0721 23:53:01.298162       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0721 23:53:01.308743       1 controller.go:615] quota admission added evaluator for: endpoints
	I0721 23:53:01.316447       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0721 23:53:01.320652       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0721 23:53:02.100131       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0721 23:53:02.543027       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.89 192.168.39.91]
	W0721 23:53:12.541409       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.91]
	
	
	==> kube-controller-manager [0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102] <==
	I0721 23:52:19.303727       1 serving.go:380] Generated self-signed cert in-memory
	I0721 23:52:19.971008       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0721 23:52:19.971121       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:52:19.972595       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0721 23:52:19.972711       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0721 23:52:19.972711       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0721 23:52:19.972857       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0721 23:52:40.021466       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.91:8443/healthz\": dial tcp 192.168.39.91:8443: connect: connection refused"
	
	
	==> kube-controller-manager [a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788] <==
	I0721 23:53:14.130634       1 shared_informer.go:320] Caches are synced for endpoint
	I0721 23:53:14.175899       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0721 23:53:14.179785       1 shared_informer.go:320] Caches are synced for resource quota
	I0721 23:53:14.206935       1 shared_informer.go:320] Caches are synced for namespace
	I0721 23:53:14.231647       1 shared_informer.go:320] Caches are synced for service account
	I0721 23:53:14.333797       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-564251"
	I0721 23:53:14.333852       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-564251-m02"
	I0721 23:53:14.334394       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-564251-m03"
	I0721 23:53:14.334436       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-564251-m04"
	I0721 23:53:14.334998       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0721 23:53:14.617367       1 shared_informer.go:320] Caches are synced for garbage collector
	I0721 23:53:14.638143       1 shared_informer.go:320] Caches are synced for garbage collector
	I0721 23:53:14.639169       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0721 23:53:16.705818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.140902ms"
	I0721 23:53:16.705939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.866µs"
	I0721 23:53:19.020153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.954µs"
	I0721 23:53:24.556932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.498903ms"
	I0721 23:53:24.557495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="198.935µs"
	I0721 23:53:26.705310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.998639ms"
	I0721 23:53:26.706912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="226.654µs"
	I0721 23:53:42.207970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.690584ms"
	I0721 23:53:42.208325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.07µs"
	I0721 23:54:03.370709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.009144ms"
	I0721 23:54:03.370898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.571µs"
	I0721 23:54:27.843761       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-564251-m04"
	
	
	==> kube-proxy [777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5] <==
	E0721 23:49:22.878781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:25.950043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:25.950182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:25.950235       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:25.950295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:25.950097       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:25.950359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:32.414102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:32.414241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:32.415431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:32.415481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:32.415784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:32.415819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:41.631709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:41.631973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:44.702879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:44.703015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:44.702957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:44.703089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:50:03.134753       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:50:03.134828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:50:03.134909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:50:03.134939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:50:09.278210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:50:09.278379       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
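
	Every failed list/watch above targets the same filtered watch: kube-proxy excludes headless services and services that set an explicit service-proxy-name label. A standard-library-only Go sketch, with the selector string taken from the decoded query above, showing how that selector ends up URL-encoded in the request:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// kube-proxy's informers filter with this label selector (decoded form).
		sel := "!service.kubernetes.io/headless,!service.kubernetes.io/service-proxy-name"
		// Query-escaping it reproduces the labelSelector= value in the log lines.
		fmt.Println("labelSelector=" + url.QueryEscape(sel))
		// -> labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name
	}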
	
	
	==> kube-proxy [a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df] <==
	I0721 23:52:19.581538       1 server_linux.go:69] "Using iptables proxy"
	E0721 23:52:21.375416       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0721 23:52:24.447023       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0721 23:52:27.518325       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0721 23:52:33.662921       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0721 23:52:45.950492       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0721 23:53:03.990913       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.91"]
	I0721 23:53:04.027284       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:53:04.027389       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:53:04.027429       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:53:04.029770       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:53:04.030001       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:53:04.030025       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:53:04.031402       1 config.go:192] "Starting service config controller"
	I0721 23:53:04.031448       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:53:04.031470       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:53:04.031486       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:53:04.033331       1 config.go:319] "Starting node config controller"
	I0721 23:53:04.033362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:53:04.131922       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0721 23:53:04.132000       1 shared_informer.go:320] Caches are synced for service config
	I0721 23:53:04.133451       1 shared_informer.go:320] Caches are synced for node config
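
The retry cadence above (23:52:21, :24, :27, :33, :45) shows kube-proxy backing off roughly exponentially until the control-plane VIP answered at 23:53:03. A minimal Go sketch of that pattern, with a hypothetical retrieveNodeInfo standing in for the real GET /api/v1/nodes/<name> call (an illustration, not kube-proxy's actual code):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retrieveNodeInfo stands in for kube-proxy's node lookup against the apiserver.
    func retrieveNodeInfo() error {
    	return errors.New(`dial tcp 192.168.39.254:8443: connect: no route to host`)
    }

    func main() {
    	delay := 3 * time.Second
    	for attempt := 1; attempt <= 6; attempt++ {
    		if err := retrieveNodeInfo(); err == nil {
    			fmt.Println("Successfully retrieved node IP(s)")
    			return
    		}
    		fmt.Printf("attempt %d failed; retrying in %s\n", attempt, delay)
    		time.Sleep(delay)
    		delay *= 2 // roughly the 3s -> 6s -> 12s spacing seen in the log
    	}
    	fmt.Println("giving up on node info")
    }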
	
	
	==> kube-scheduler [22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624] <==
	W0721 23:50:34.443283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:50:34.443353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0721 23:50:34.597881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0721 23:50:34.597930       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0721 23:50:36.296529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0721 23:50:36.296619       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0721 23:50:37.134957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0721 23:50:37.135068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0721 23:50:37.449203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0721 23:50:37.449279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0721 23:50:37.656813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0721 23:50:37.656897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0721 23:50:37.820460       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0721 23:50:37.820538       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0721 23:50:38.041209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0721 23:50:38.041304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0721 23:50:38.127874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0721 23:50:38.127988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0721 23:50:38.209687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0721 23:50:38.209802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0721 23:50:39.054425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0721 23:50:39.054457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0721 23:50:39.376172       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:50:39.376249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:50:39.693013       1 run.go:74] "command failed" err="finished without leader elect"
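
The burst of "forbidden" errors above is the scheduler's informers listing resources before the restarted apiserver has its RBAC machinery warmed up; the run then exits on the leader-election error. To probe one of these permissions directly, a client-go SelfSubjectAccessReview sketch would look roughly like this (illustrative only; assumes a kubeconfig at the default path and the caller's own identity):

    package main

    import (
    	"context"
    	"fmt"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Ask the apiserver: may I list persistentvolumeclaims at the cluster scope?
    	review := &authv1.SelfSubjectAccessReview{
    		Spec: authv1.SelfSubjectAccessReviewSpec{
    			ResourceAttributes: &authv1.ResourceAttributes{
    				Verb:     "list",
    				Resource: "persistentvolumeclaims",
    			},
    		},
    	}
    	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
    		Create(context.Background(), review, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }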
	
	
	==> kube-scheduler [8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99] <==
	W0721 23:52:55.585045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:55.585153       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:55.700178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.91:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:55.700286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.91:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:56.408145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.91:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:56.408206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.91:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:57.181846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.91:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:57.181913       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.91:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:57.385594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.91:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:57.385731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.91:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:57.689779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:57.689841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:58.298244       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:58.298331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:53:01.178257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0721 23:53:01.178334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0721 23:53:01.180865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0721 23:53:01.181312       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0721 23:53:01.181045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0721 23:53:01.181334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0721 23:53:01.181206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0721 23:53:01.181347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0721 23:53:01.181545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0721 23:53:01.181754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0721 23:53:01.525848       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
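
The "Waiting for caches to sync" / "Caches are synced" pairing comes from client-go's shared informer machinery. A minimal sketch of the same handshake using cache.WaitForCacheSync (illustrative; assumes a reachable cluster via the default kubeconfig):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
    	svc := factory.Core().V1().Services().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	fmt.Println("Waiting for caches to sync for service config")
    	if !cache.WaitForCacheSync(stop, svc.HasSynced) {
    		panic("caches never synced")
    	}
    	fmt.Println("Caches are synced for service config")
    }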
	
	
	==> kubelet <==
	Jul 21 23:53:01 ha-564251 kubelet[1363]: I0721 23:53:01.002002    1363 scope.go:117] "RemoveContainer" containerID="21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf"
	Jul 21 23:53:01 ha-564251 kubelet[1363]: E0721 23:53:01.002235    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75c1992e-23ca-41e0-b046-1b70a6f6f63a)\"" pod="kube-system/storage-provisioner" podUID="75c1992e-23ca-41e0-b046-1b70a6f6f63a"
	Jul 21 23:53:01 ha-564251 kubelet[1363]: I0721 23:53:01.309990    1363 status_manager.go:853] "Failed to get status for pod" podUID="973effc0455eb71d145acfc351605cda" pod="kube-system/kube-controller-manager-ha-564251" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 21 23:53:04 ha-564251 kubelet[1363]: E0721 23:53:04.381940    1363 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-564251\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 21 23:53:04 ha-564251 kubelet[1363]: I0721 23:53:04.382347    1363 status_manager.go:853] "Failed to get status for pod" podUID="ebae638d-339c-4241-a5b3-ab4c766efc2f" pod="kube-system/coredns-7db6d8ff4d-f4lqn" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f4lqn\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 21 23:53:04 ha-564251 kubelet[1363]: E0721 23:53:04.382638    1363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-564251?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 21 23:53:15 ha-564251 kubelet[1363]: I0721 23:53:15.002129    1363 scope.go:117] "RemoveContainer" containerID="21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf"
	Jul 21 23:53:15 ha-564251 kubelet[1363]: E0721 23:53:15.002690    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75c1992e-23ca-41e0-b046-1b70a6f6f63a)\"" pod="kube-system/storage-provisioner" podUID="75c1992e-23ca-41e0-b046-1b70a6f6f63a"
	Jul 21 23:53:20 ha-564251 kubelet[1363]: E0721 23:53:20.022919    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:53:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:53:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:53:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:53:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:53:26 ha-564251 kubelet[1363]: I0721 23:53:26.002768    1363 scope.go:117] "RemoveContainer" containerID="21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf"
	Jul 21 23:53:26 ha-564251 kubelet[1363]: E0721 23:53:26.002984    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75c1992e-23ca-41e0-b046-1b70a6f6f63a)\"" pod="kube-system/storage-provisioner" podUID="75c1992e-23ca-41e0-b046-1b70a6f6f63a"
	Jul 21 23:53:39 ha-564251 kubelet[1363]: I0721 23:53:39.002448    1363 scope.go:117] "RemoveContainer" containerID="21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf"
	Jul 21 23:53:42 ha-564251 kubelet[1363]: I0721 23:53:42.106833    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-tvjh7" podStartSLOduration=585.511669388 podStartE2EDuration="9m48.106798691s" podCreationTimestamp="2024-07-21 23:43:54 +0000 UTC" firstStartedPulling="2024-07-21 23:43:55.474164809 +0000 UTC m=+155.595946884" lastFinishedPulling="2024-07-21 23:43:58.069294118 +0000 UTC m=+158.191076187" observedRunningTime="2024-07-21 23:43:58.652518141 +0000 UTC m=+158.774300262" watchObservedRunningTime="2024-07-21 23:53:42.106798691 +0000 UTC m=+742.228580751"
	Jul 21 23:53:53 ha-564251 kubelet[1363]: I0721 23:53:53.002511    1363 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-564251" podUID="e865cc87-be77-43f3-bef2-4c47dbe7ffe5"
	Jul 21 23:53:53 ha-564251 kubelet[1363]: I0721 23:53:53.021981    1363 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-564251"
	Jul 21 23:54:00 ha-564251 kubelet[1363]: I0721 23:54:00.069088    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-564251" podStartSLOduration=7.069049566 podStartE2EDuration="7.069049566s" podCreationTimestamp="2024-07-21 23:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-21 23:54:00.068672158 +0000 UTC m=+760.190454237" watchObservedRunningTime="2024-07-21 23:54:00.069049566 +0000 UTC m=+760.190831640"
	Jul 21 23:54:20 ha-564251 kubelet[1363]: E0721 23:54:20.025098    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:54:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:54:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:54:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:54:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
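
The canary failure above is kubelet periodically trying to create a KUBE-KUBELET-CANARY chain in the ip6tables nat table so it can detect rule flushes; it fails here because the guest kernel has no ip6table_nat module loaded. A rough stand-in for that probe (hypothetical, not kubelet's actual iptables wrapper; needs root to run):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the failing call in the log: create the canary chain in the nat table.
    	out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
    	if err != nil {
    		fmt.Printf("could not set up iptables canary: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("canary chain created")
    }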
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 23:54:34.840849   30819 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-5094/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
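
The "bufio.Scanner: token too long" failure above is Go's bufio.Scanner hitting its default 64 KiB per-token limit on an over-long line in lastStart.txt. The standard fix is to enlarge the scanner's buffer before scanning; a minimal sketch (assumed, not minikube's actual logs.go code):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    )

    func main() {
    	f, err := os.Open("lastStart.txt") // path is illustrative
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	// Allow tokens up to 10 MiB instead of the 64 KiB default (bufio.MaxScanTokenSize).
    	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
    	for sc.Scan() {
    		fmt.Println(sc.Text())
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, "scan failed:", err)
    	}
    }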
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-564251 -n ha-564251
helpers_test.go:261: (dbg) Run:  kubectl --context ha-564251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (359.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 stop -v=7 --alsologtostderr
E0721 23:54:55.172745   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 stop -v=7 --alsologtostderr: exit status 82 (2m0.449866671s)

                                                
                                                
-- stdout --
	* Stopping node "ha-564251-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 23:54:54.522700   31227 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:54:54.522954   31227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:54:54.522963   31227 out.go:304] Setting ErrFile to fd 2...
	I0721 23:54:54.522969   31227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:54:54.523145   31227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:54:54.523386   31227 out.go:298] Setting JSON to false
	I0721 23:54:54.523474   31227 mustload.go:65] Loading cluster: ha-564251
	I0721 23:54:54.523840   31227 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:54:54.523937   31227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:54:54.524111   31227 mustload.go:65] Loading cluster: ha-564251
	I0721 23:54:54.524278   31227 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:54:54.524307   31227 stop.go:39] StopHost: ha-564251-m04
	I0721 23:54:54.524687   31227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:54:54.524736   31227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:54:54.539324   31227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46305
	I0721 23:54:54.539766   31227 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:54:54.540290   31227 main.go:141] libmachine: Using API Version  1
	I0721 23:54:54.540313   31227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:54:54.540625   31227 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:54:54.543142   31227 out.go:177] * Stopping node "ha-564251-m04"  ...
	I0721 23:54:54.544275   31227 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0721 23:54:54.544317   31227 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:54:54.544521   31227 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0721 23:54:54.544548   31227 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:54:54.547455   31227 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:54:54.547931   31227 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:54:22 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:54:54.547953   31227 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:54:54.548189   31227 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:54:54.548374   31227 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:54:54.548522   31227 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:54:54.548690   31227 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	I0721 23:54:54.633139   31227 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0721 23:54:54.685215   31227 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0721 23:54:54.737245   31227 main.go:141] libmachine: Stopping "ha-564251-m04"...
	I0721 23:54:54.737314   31227 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:54:54.738811   31227 main.go:141] libmachine: (ha-564251-m04) Calling .Stop
	I0721 23:54:54.742051   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 0/120
	I0721 23:54:55.743460   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 1/120
	I0721 23:54:56.745098   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 2/120
	I0721 23:54:57.746478   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 3/120
	I0721 23:54:58.747886   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 4/120
	I0721 23:54:59.749852   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 5/120
	I0721 23:55:00.751289   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 6/120
	I0721 23:55:01.752683   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 7/120
	I0721 23:55:02.754133   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 8/120
	I0721 23:55:03.755645   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 9/120
	I0721 23:55:04.757752   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 10/120
	I0721 23:55:05.759095   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 11/120
	I0721 23:55:06.760329   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 12/120
	I0721 23:55:07.762254   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 13/120
	I0721 23:55:08.763615   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 14/120
	I0721 23:55:09.765695   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 15/120
	I0721 23:55:10.767325   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 16/120
	I0721 23:55:11.769365   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 17/120
	I0721 23:55:12.770467   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 18/120
	I0721 23:55:13.771793   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 19/120
	I0721 23:55:14.773734   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 20/120
	I0721 23:55:15.775290   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 21/120
	I0721 23:55:16.776515   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 22/120
	I0721 23:55:17.777675   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 23/120
	I0721 23:55:18.779050   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 24/120
	I0721 23:55:19.780686   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 25/120
	I0721 23:55:20.782137   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 26/120
	I0721 23:55:21.783421   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 27/120
	I0721 23:55:22.784690   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 28/120
	I0721 23:55:23.785945   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 29/120
	I0721 23:55:24.787936   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 30/120
	I0721 23:55:25.789059   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 31/120
	I0721 23:55:26.790253   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 32/120
	I0721 23:55:27.791588   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 33/120
	I0721 23:55:28.792982   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 34/120
	I0721 23:55:29.794909   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 35/120
	I0721 23:55:30.796988   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 36/120
	I0721 23:55:31.798356   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 37/120
	I0721 23:55:32.799601   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 38/120
	I0721 23:55:33.801650   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 39/120
	I0721 23:55:34.803862   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 40/120
	I0721 23:55:35.805761   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 41/120
	I0721 23:55:36.807390   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 42/120
	I0721 23:55:37.809203   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 43/120
	I0721 23:55:38.810769   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 44/120
	I0721 23:55:39.812025   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 45/120
	I0721 23:55:40.813230   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 46/120
	I0721 23:55:41.814510   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 47/120
	I0721 23:55:42.816545   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 48/120
	I0721 23:55:43.817719   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 49/120
	I0721 23:55:44.819717   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 50/120
	I0721 23:55:45.821331   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 51/120
	I0721 23:55:46.823401   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 52/120
	I0721 23:55:47.824926   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 53/120
	I0721 23:55:48.826128   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 54/120
	I0721 23:55:49.827693   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 55/120
	I0721 23:55:50.828916   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 56/120
	I0721 23:55:51.830721   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 57/120
	I0721 23:55:52.831827   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 58/120
	I0721 23:55:53.833172   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 59/120
	I0721 23:55:54.834397   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 60/120
	I0721 23:55:55.835686   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 61/120
	I0721 23:55:56.837349   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 62/120
	I0721 23:55:57.838791   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 63/120
	I0721 23:55:58.841023   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 64/120
	I0721 23:55:59.842871   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 65/120
	I0721 23:56:00.844992   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 66/120
	I0721 23:56:01.846469   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 67/120
	I0721 23:56:02.848036   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 68/120
	I0721 23:56:03.849302   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 69/120
	I0721 23:56:04.851421   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 70/120
	I0721 23:56:05.852995   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 71/120
	I0721 23:56:06.854834   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 72/120
	I0721 23:56:07.857240   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 73/120
	I0721 23:56:08.858350   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 74/120
	I0721 23:56:09.860229   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 75/120
	I0721 23:56:10.861452   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 76/120
	I0721 23:56:11.862708   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 77/120
	I0721 23:56:12.864090   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 78/120
	I0721 23:56:13.865452   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 79/120
	I0721 23:56:14.867722   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 80/120
	I0721 23:56:15.868967   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 81/120
	I0721 23:56:16.870341   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 82/120
	I0721 23:56:17.871648   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 83/120
	I0721 23:56:18.873110   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 84/120
	I0721 23:56:19.874895   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 85/120
	I0721 23:56:20.876114   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 86/120
	I0721 23:56:21.877248   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 87/120
	I0721 23:56:22.878635   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 88/120
	I0721 23:56:23.879752   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 89/120
	I0721 23:56:24.881754   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 90/120
	I0721 23:56:25.883452   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 91/120
	I0721 23:56:26.885468   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 92/120
	I0721 23:56:27.886832   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 93/120
	I0721 23:56:28.888093   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 94/120
	I0721 23:56:29.889821   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 95/120
	I0721 23:56:30.891153   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 96/120
	I0721 23:56:31.892580   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 97/120
	I0721 23:56:32.894698   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 98/120
	I0721 23:56:33.896083   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 99/120
	I0721 23:56:34.897864   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 100/120
	I0721 23:56:35.899214   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 101/120
	I0721 23:56:36.900280   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 102/120
	I0721 23:56:37.901698   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 103/120
	I0721 23:56:38.902829   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 104/120
	I0721 23:56:39.904167   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 105/120
	I0721 23:56:40.905471   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 106/120
	I0721 23:56:41.906592   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 107/120
	I0721 23:56:42.907870   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 108/120
	I0721 23:56:43.909072   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 109/120
	I0721 23:56:44.911055   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 110/120
	I0721 23:56:45.912418   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 111/120
	I0721 23:56:46.913585   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 112/120
	I0721 23:56:47.914889   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 113/120
	I0721 23:56:48.916200   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 114/120
	I0721 23:56:49.917831   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 115/120
	I0721 23:56:50.919291   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 116/120
	I0721 23:56:51.920684   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 117/120
	I0721 23:56:52.922385   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 118/120
	I0721 23:56:53.923858   31227 main.go:141] libmachine: (ha-564251-m04) Waiting for machine to stop 119/120
	I0721 23:56:54.924548   31227 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0721 23:56:54.924615   31227 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0721 23:56:54.926692   31227 out.go:177] 
	W0721 23:56:54.928193   31227 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0721 23:56:54.928209   31227 out.go:239] * 
	W0721 23:56:54.930352   31227 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 23:56:54.931719   31227 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-564251 stop -v=7 --alsologtostderr": exit status 82
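
The stderr above shows libmachine polling once per second ("Waiting for machine to stop N/120") and giving up after two minutes with GUEST_STOP_TIMEOUT (exit status 82) because the VM never left the "Running" state. A minimal sketch of that bounded-poll pattern, with a hypothetical machineState helper in place of the KVM driver query:

    package main

    import (
    	"fmt"
    	"time"
    )

    // machineState stands in for the libvirt/KVM driver's state query.
    func machineState() string { return "Running" }

    func main() {
    	const attempts = 120 // one probe per second, two minutes total
    	for i := 0; i < attempts; i++ {
    		if machineState() == "Stopped" {
    			fmt.Println("machine stopped")
    			return
    		}
    		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
    		time.Sleep(time.Second)
    	}
    	fmt.Println(`stop err: unable to stop vm, current state "Running"`)
    }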
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr: exit status 3 (18.825227559s)

                                                
                                                
-- stdout --
	ha-564251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-564251-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 23:56:54.973882   31656 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:56:54.974116   31656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:56:54.974123   31656 out.go:304] Setting ErrFile to fd 2...
	I0721 23:56:54.974127   31656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:56:54.974348   31656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:56:54.974545   31656 out.go:298] Setting JSON to false
	I0721 23:56:54.974580   31656 mustload.go:65] Loading cluster: ha-564251
	I0721 23:56:54.974653   31656 notify.go:220] Checking for updates...
	I0721 23:56:54.974999   31656 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:56:54.975015   31656 status.go:255] checking status of ha-564251 ...
	I0721 23:56:54.975420   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:54.975481   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:54.995367   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38327
	I0721 23:56:54.995822   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:54.996410   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:54.996463   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:54.996826   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:54.997033   31656 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:56:54.998621   31656 status.go:330] ha-564251 host status = "Running" (err=<nil>)
	I0721 23:56:54.998638   31656 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:56:54.998910   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:54.998947   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:55.013313   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I0721 23:56:55.013691   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:55.014090   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:55.014109   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:55.014463   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:55.014673   31656 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:56:55.017048   31656 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:56:55.017651   31656 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:56:55.017688   31656 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:56:55.017838   31656 host.go:66] Checking if "ha-564251" exists ...
	I0721 23:56:55.018099   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:55.018130   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:55.033461   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42657
	I0721 23:56:55.033956   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:55.034470   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:55.034489   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:55.034828   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:55.034990   31656 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:56:55.035217   31656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:56:55.035247   31656 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:56:55.038009   31656 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:56:55.038401   31656 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:56:55.038415   31656 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:56:55.038596   31656 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:56:55.038800   31656 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:56:55.038931   31656 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:56:55.039169   31656 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:56:55.123050   31656 ssh_runner.go:195] Run: systemctl --version
	I0721 23:56:55.129769   31656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:56:55.145057   31656 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:56:55.145084   31656 api_server.go:166] Checking apiserver status ...
	I0721 23:56:55.145124   31656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:56:55.160053   31656 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4954/cgroup
	W0721 23:56:55.168648   31656 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4954/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:56:55.168697   31656 ssh_runner.go:195] Run: ls
	I0721 23:56:55.172688   31656 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:56:55.178400   31656 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:56:55.178421   31656 status.go:422] ha-564251 apiserver status = Running (err=<nil>)
	I0721 23:56:55.178432   31656 status.go:257] ha-564251 status: &{Name:ha-564251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:56:55.178450   31656 status.go:255] checking status of ha-564251-m02 ...
	I0721 23:56:55.178764   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:55.178803   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:55.194997   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0721 23:56:55.195392   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:55.195864   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:55.195886   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:55.196190   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:55.196396   31656 main.go:141] libmachine: (ha-564251-m02) Calling .GetState
	I0721 23:56:55.197934   31656 status.go:330] ha-564251-m02 host status = "Running" (err=<nil>)
	I0721 23:56:55.197951   31656 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:56:55.198218   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:55.198252   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:55.213002   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35807
	I0721 23:56:55.213396   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:55.213905   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:55.213933   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:55.214240   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:55.214453   31656 main.go:141] libmachine: (ha-564251-m02) Calling .GetIP
	I0721 23:56:55.217391   31656 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:56:55.217840   31656 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:52:22 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:56:55.217863   31656 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:56:55.217992   31656 host.go:66] Checking if "ha-564251-m02" exists ...
	I0721 23:56:55.218259   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:55.218297   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:55.232675   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I0721 23:56:55.232998   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:55.233431   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:55.233450   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:55.233813   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:55.234006   31656 main.go:141] libmachine: (ha-564251-m02) Calling .DriverName
	I0721 23:56:55.234204   31656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:56:55.234224   31656 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHHostname
	I0721 23:56:55.236982   31656 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:56:55.237379   31656 main.go:141] libmachine: (ha-564251-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:f8:82", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:52:22 +0000 UTC Type:0 Mac:52:54:00:38:f8:82 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-564251-m02 Clientid:01:52:54:00:38:f8:82}
	I0721 23:56:55.237402   31656 main.go:141] libmachine: (ha-564251-m02) DBG | domain ha-564251-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:38:f8:82 in network mk-ha-564251
	I0721 23:56:55.237536   31656 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHPort
	I0721 23:56:55.237711   31656 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHKeyPath
	I0721 23:56:55.237837   31656 main.go:141] libmachine: (ha-564251-m02) Calling .GetSSHUsername
	I0721 23:56:55.237955   31656 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m02/id_rsa Username:docker}
	I0721 23:56:55.315586   31656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:56:55.332359   31656 kubeconfig.go:125] found "ha-564251" server: "https://192.168.39.254:8443"
	I0721 23:56:55.332384   31656 api_server.go:166] Checking apiserver status ...
	I0721 23:56:55.332419   31656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:56:55.346208   31656 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	W0721 23:56:55.354871   31656 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 23:56:55.354941   31656 ssh_runner.go:195] Run: ls
	I0721 23:56:55.359429   31656 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0721 23:56:55.363825   31656 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0721 23:56:55.363843   31656 status.go:422] ha-564251-m02 apiserver status = Running (err=<nil>)
	I0721 23:56:55.363854   31656 status.go:257] ha-564251-m02 status: &{Name:ha-564251-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 23:56:55.363868   31656 status.go:255] checking status of ha-564251-m04 ...
	I0721 23:56:55.364130   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:55.364159   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:55.378699   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0721 23:56:55.379115   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:55.379603   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:55.379626   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:55.379908   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:55.380105   31656 main.go:141] libmachine: (ha-564251-m04) Calling .GetState
	I0721 23:56:55.381639   31656 status.go:330] ha-564251-m04 host status = "Running" (err=<nil>)
	I0721 23:56:55.381653   31656 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:56:55.381995   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:55.382049   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:55.396193   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41167
	I0721 23:56:55.396652   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:55.397136   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:55.397154   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:55.397482   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:55.397698   31656 main.go:141] libmachine: (ha-564251-m04) Calling .GetIP
	I0721 23:56:55.400427   31656 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:56:55.400720   31656 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:54:22 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:56:55.400742   31656 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:56:55.400876   31656 host.go:66] Checking if "ha-564251-m04" exists ...
	I0721 23:56:55.401134   31656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:56:55.401167   31656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:56:55.415026   31656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I0721 23:56:55.415384   31656 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:56:55.415816   31656 main.go:141] libmachine: Using API Version  1
	I0721 23:56:55.415833   31656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:56:55.416150   31656 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:56:55.416347   31656 main.go:141] libmachine: (ha-564251-m04) Calling .DriverName
	I0721 23:56:55.416540   31656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 23:56:55.416558   31656 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHHostname
	I0721 23:56:55.418870   31656 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:56:55.419263   31656 main.go:141] libmachine: (ha-564251-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:d8:24", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:54:22 +0000 UTC Type:0 Mac:52:54:00:0e:d8:24 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-564251-m04 Clientid:01:52:54:00:0e:d8:24}
	I0721 23:56:55.419281   31656 main.go:141] libmachine: (ha-564251-m04) DBG | domain ha-564251-m04 has defined IP address 192.168.39.226 and MAC address 52:54:00:0e:d8:24 in network mk-ha-564251
	I0721 23:56:55.419461   31656 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHPort
	I0721 23:56:55.419628   31656 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHKeyPath
	I0721 23:56:55.419754   31656 main.go:141] libmachine: (ha-564251-m04) Calling .GetSSHUsername
	I0721 23:56:55.419917   31656 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251-m04/id_rsa Username:docker}
	W0721 23:57:13.758810   31656 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.226:22: connect: no route to host
	W0721 23:57:13.758903   31656 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	E0721 23:57:13.758920   31656 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	I0721 23:57:13.758929   31656 status.go:257] ha-564251-m04 status: &{Name:ha-564251-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0721 23:57:13.758948   31656 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr" : exit status 3
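The exit status 3 comes from the m04 status check at the end of the stderr log: the node is no longer reachable over SSH, so every dial to 192.168.39.226:22 returns "no route to host" until sshutil stops retrying, and the host is reported as Error. A hedged Go sketch of that dial-with-retry pattern (the attempt count and backoff are assumptions, not minikube's actual values):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry mirrors the retry behaviour visible in the sshutil lines:
// each attempt against a stopped node fails until the budget is exhausted.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	if _, err := dialWithRetry("192.168.39.226:22", 3, 2*time.Second); err != nil {
		fmt.Println(err) // a stopped VM typically yields "connect: no route to host"
	}
}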
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-564251 -n ha-564251
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-564251 logs -n 25: (1.550238416s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-564251 ssh -n ha-564251-m02 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04:/home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m04 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp testdata/cp-test.txt                                                | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251:/home/docker/cp-test_ha-564251-m04_ha-564251.txt                       |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251 sudo cat                                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251.txt                                 |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m02:/home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m02 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m03:/home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n                                                                 | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | ha-564251-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-564251 ssh -n ha-564251-m03 sudo cat                                          | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | /home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-564251 node stop m02 -v=7                                                     | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-564251 node start m02 -v=7                                                    | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-564251 -v=7                                                           | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-564251 -v=7                                                                | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-564251 --wait=true -v=7                                                    | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:50 UTC | 21 Jul 24 23:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-564251                                                                | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:54 UTC |                     |
	| node    | ha-564251 node delete m03 -v=7                                                   | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:54 UTC | 21 Jul 24 23:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-564251 stop -v=7                                                              | ha-564251 | jenkins | v1.33.1 | 21 Jul 24 23:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:50:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:50:38.927786   29454 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:50:38.927920   29454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:50:38.927932   29454 out.go:304] Setting ErrFile to fd 2...
	I0721 23:50:38.927938   29454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:50:38.928194   29454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:50:38.928955   29454 out.go:298] Setting JSON to false
	I0721 23:50:38.930225   29454 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1983,"bootTime":1721603856,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:50:38.930320   29454 start.go:139] virtualization: kvm guest
	I0721 23:50:38.932561   29454 out.go:177] * [ha-564251] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:50:38.933858   29454 notify.go:220] Checking for updates...
	I0721 23:50:38.933880   29454 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:50:38.935112   29454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:50:38.936300   29454 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:50:38.937451   29454 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:50:38.938566   29454 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0721 23:50:38.939834   29454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:50:38.941529   29454 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:50:38.941673   29454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:50:38.942294   29454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:50:38.942344   29454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:50:38.957011   29454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0721 23:50:38.957455   29454 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:50:38.957985   29454 main.go:141] libmachine: Using API Version  1
	I0721 23:50:38.958006   29454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:50:38.958480   29454 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:50:38.958767   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:50:38.995750   29454 out.go:177] * Using the kvm2 driver based on existing profile
	I0721 23:50:38.997139   29454 start.go:297] selected driver: kvm2
	I0721 23:50:38.997157   29454 start.go:901] validating driver "kvm2" against &{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.226 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:50:38.997370   29454 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:50:38.997828   29454 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:50:38.997930   29454 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:50:39.012573   29454 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:50:39.013286   29454 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:50:39.013367   29454 cni.go:84] Creating CNI manager for ""
	I0721 23:50:39.013381   29454 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0721 23:50:39.013455   29454 start.go:340] cluster config:
	{Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.226 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:50:39.013624   29454 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:50:39.015529   29454 out.go:177] * Starting "ha-564251" primary control-plane node in "ha-564251" cluster
	I0721 23:50:39.016643   29454 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:50:39.016677   29454 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0721 23:50:39.016686   29454 cache.go:56] Caching tarball of preloaded images
	I0721 23:50:39.016753   29454 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0721 23:50:39.016763   29454 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0721 23:50:39.016910   29454 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/config.json ...
	I0721 23:50:39.017096   29454 start.go:360] acquireMachinesLock for ha-564251: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:50:39.017143   29454 start.go:364] duration metric: took 23.119µs to acquireMachinesLock for "ha-564251"
	I0721 23:50:39.017159   29454 start.go:96] Skipping create...Using existing machine configuration
	I0721 23:50:39.017167   29454 fix.go:54] fixHost starting: 
	I0721 23:50:39.017436   29454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:50:39.017465   29454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:50:39.031101   29454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0721 23:50:39.031508   29454 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:50:39.031925   29454 main.go:141] libmachine: Using API Version  1
	I0721 23:50:39.031939   29454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:50:39.032258   29454 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:50:39.032462   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:50:39.032630   29454 main.go:141] libmachine: (ha-564251) Calling .GetState
	I0721 23:50:39.034087   29454 fix.go:112] recreateIfNeeded on ha-564251: state=Running err=<nil>
	W0721 23:50:39.034107   29454 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 23:50:39.036757   29454 out.go:177] * Updating the running kvm2 "ha-564251" VM ...
	I0721 23:50:39.038071   29454 machine.go:94] provisionDockerMachine start ...
	I0721 23:50:39.038089   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:50:39.038287   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.040759   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.041153   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.041197   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.041337   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.041517   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.041663   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.041783   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.041917   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:39.042079   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:50:39.042088   29454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 23:50:39.143281   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251
	
	I0721 23:50:39.143312   29454 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:50:39.143584   29454 buildroot.go:166] provisioning hostname "ha-564251"
	I0721 23:50:39.143610   29454 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:50:39.143818   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.146563   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.147011   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.147034   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.147147   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.147327   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.147482   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.147699   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.147887   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:39.148096   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:50:39.148115   29454 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-564251 && echo "ha-564251" | sudo tee /etc/hostname
	I0721 23:50:39.260224   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-564251
	
	I0721 23:50:39.260293   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.263280   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.263740   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.263768   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.263892   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.264058   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.264218   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.264338   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.264517   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:39.264675   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:50:39.264689   29454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-564251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-564251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-564251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:50:39.359209   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
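The SSH command above is an idempotent /etc/hosts update: if no line already ends with the hostname, it rewrites the 127.0.1.1 entry, or appends one if none exists. The same logic in Go, as an illustrative sketch rather than minikube's actual implementation:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry returns hosts content guaranteed to map 127.0.1.1 to name.
func ensureHostsEntry(hosts, name string) string {
	// Equivalent of: grep -xq '.*\s<name>' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-564251"))
}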
	I0721 23:50:39.359239   29454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0721 23:50:39.359255   29454 buildroot.go:174] setting up certificates
	I0721 23:50:39.359262   29454 provision.go:84] configureAuth start
	I0721 23:50:39.359272   29454 main.go:141] libmachine: (ha-564251) Calling .GetMachineName
	I0721 23:50:39.359510   29454 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:50:39.361909   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.362240   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.362268   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.362379   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.364761   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.365197   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.365219   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.365408   29454 provision.go:143] copyHostCerts
	I0721 23:50:39.365451   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:50:39.365500   29454 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0721 23:50:39.365513   29454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0721 23:50:39.365594   29454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0721 23:50:39.365718   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:50:39.365745   29454 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0721 23:50:39.365754   29454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0721 23:50:39.365799   29454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0721 23:50:39.365872   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:50:39.365894   29454 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0721 23:50:39.365900   29454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0721 23:50:39.365936   29454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0721 23:50:39.366016   29454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.ha-564251 san=[127.0.0.1 192.168.39.91 ha-564251 localhost minikube]
	I0721 23:50:39.434031   29454 provision.go:177] copyRemoteCerts
	I0721 23:50:39.434097   29454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:50:39.434119   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.436867   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.437340   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.437359   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.437538   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.437709   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.437873   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.438006   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:50:39.516887   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0721 23:50:39.516967   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0721 23:50:39.540045   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0721 23:50:39.540123   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0721 23:50:39.562573   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0721 23:50:39.562706   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 23:50:39.585694   29454 provision.go:87] duration metric: took 226.419199ms to configureAuth
	I0721 23:50:39.585727   29454 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:50:39.586022   29454 config.go:182] Loaded profile config "ha-564251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:50:39.586125   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:50:39.588607   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.589054   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:50:39.589094   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:50:39.589249   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:50:39.589430   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.589564   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:50:39.589723   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:50:39.589848   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:39.590007   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:50:39.590023   29454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0721 23:52:10.308433   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0721 23:52:10.308467   29454 machine.go:97] duration metric: took 1m31.270382022s to provisionDockerMachine
	I0721 23:52:10.308484   29454 start.go:293] postStartSetup for "ha-564251" (driver="kvm2")
	I0721 23:52:10.308499   29454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:52:10.308533   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.308974   29454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:52:10.309004   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.311870   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.312338   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.312367   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.312457   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.312631   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.312781   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.312929   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:52:10.394011   29454 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:52:10.397975   29454 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:52:10.398007   29454 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0721 23:52:10.398081   29454 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0721 23:52:10.398184   29454 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0721 23:52:10.398197   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0721 23:52:10.398279   29454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0721 23:52:10.407477   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:52:10.429728   29454 start.go:296] duration metric: took 121.230654ms for postStartSetup
	I0721 23:52:10.429766   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.430047   29454 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0721 23:52:10.430078   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.432773   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.433170   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.433194   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.433455   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.433634   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.433793   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.433986   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	W0721 23:52:10.512818   29454 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0721 23:52:10.512851   29454 fix.go:56] duration metric: took 1m31.495683363s for fixHost
	I0721 23:52:10.512876   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.515634   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.516025   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.516047   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.516212   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.516456   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.516602   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.516748   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.516928   29454 main.go:141] libmachine: Using SSH client type: native
	I0721 23:52:10.517088   29454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0721 23:52:10.517099   29454 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:52:10.615017   29454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605930.576237422
	
	I0721 23:52:10.615038   29454 fix.go:216] guest clock: 1721605930.576237422
	I0721 23:52:10.615048   29454 fix.go:229] Guest: 2024-07-21 23:52:10.576237422 +0000 UTC Remote: 2024-07-21 23:52:10.512858408 +0000 UTC m=+91.621596507 (delta=63.379014ms)
	I0721 23:52:10.615090   29454 fix.go:200] guest clock delta is within tolerance: 63.379014ms
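The tolerance check above compares the guest clock (parsed from the date command run over SSH) against the host-side timestamp and skips a clock resync when the delta is small. A sketch of that comparison (the 2s tolerance is an assumption for illustration; the log shows only that a 63ms delta is within whatever threshold minikube uses):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute guest/host clock skew and
// whether it falls within the given tolerance.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1721605930, 576237422) // seconds.nanoseconds read from the VM
	host := time.Now()
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}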
	I0721 23:52:10.615098   29454 start.go:83] releasing machines lock for "ha-564251", held for 1m31.597943082s
	I0721 23:52:10.615129   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.615404   29454 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:52:10.617866   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.618197   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.618223   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.618380   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.618900   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.619070   29454 main.go:141] libmachine: (ha-564251) Calling .DriverName
	I0721 23:52:10.619171   29454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0721 23:52:10.619236   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.619261   29454 ssh_runner.go:195] Run: cat /version.json
	I0721 23:52:10.619285   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHHostname
	I0721 23:52:10.621835   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.622018   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.622227   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.622251   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.622395   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.622488   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:10.622523   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:10.622526   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.622711   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHPort
	I0721 23:52:10.622744   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.622904   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHKeyPath
	I0721 23:52:10.622898   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:52:10.623062   29454 main.go:141] libmachine: (ha-564251) Calling .GetSSHUsername
	I0721 23:52:10.623208   29454 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/ha-564251/id_rsa Username:docker}
	I0721 23:52:10.695361   29454 ssh_runner.go:195] Run: systemctl --version
	I0721 23:52:10.722582   29454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0721 23:52:10.885900   29454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:52:10.891576   29454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:52:10.891669   29454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:52:10.900250   29454 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0721 23:52:10.900270   29454 start.go:495] detecting cgroup driver to use...
	I0721 23:52:10.900345   29454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:52:10.915907   29454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:52:10.929949   29454 docker.go:217] disabling cri-docker service (if available) ...
	I0721 23:52:10.930013   29454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0721 23:52:10.943291   29454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0721 23:52:10.956467   29454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0721 23:52:11.106012   29454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0721 23:52:11.257298   29454 docker.go:233] disabling docker service ...
	I0721 23:52:11.257368   29454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0721 23:52:11.272246   29454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0721 23:52:11.284868   29454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0721 23:52:11.428586   29454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0721 23:52:11.569523   29454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
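The stop/disable/mask sequence above is how minikube ensures only CRI-O stays active as the container runtime. A condensed manual equivalent, using the same unit names the log shows (a sketch only; the test runs the individual commands above):

    for u in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$u" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service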
	I0721 23:52:11.582551   29454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:52:11.600786   29454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0721 23:52:11.600840   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.610507   29454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0721 23:52:11.610563   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.619982   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.629360   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.638694   29454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:52:11.648202   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.657359   29454 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.667805   29454 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0721 23:52:11.677216   29454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:52:11.685892   29454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:52:11.694393   29454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:52:11.834530   29454 ssh_runner.go:195] Run: sudo systemctl restart crio
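Taken together, the sed edits above leave the CRI-O drop-in looking roughly like this before the daemon-reload and restart (a reconstruction from the logged commands, not a dump of the actual /etc/crio/crio.conf.d/02-crio.conf):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]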
	I0721 23:52:12.090295   29454 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0721 23:52:12.090357   29454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0721 23:52:12.096587   29454 start.go:563] Will wait 60s for crictl version
	I0721 23:52:12.096635   29454 ssh_runner.go:195] Run: which crictl
	I0721 23:52:12.100166   29454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:52:12.133969   29454 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
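The version probe above picks up the endpoint from the /etc/crictl.yaml written earlier; the same query works with the socket given explicitly (standard crictl flag):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version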
	I0721 23:52:12.134042   29454 ssh_runner.go:195] Run: crio --version
	I0721 23:52:12.161370   29454 ssh_runner.go:195] Run: crio --version
	I0721 23:52:12.190068   29454 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0721 23:52:12.191187   29454 main.go:141] libmachine: (ha-564251) Calling .GetIP
	I0721 23:52:12.193888   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:12.194234   29454 main.go:141] libmachine: (ha-564251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:9e:c7", ip: ""} in network mk-ha-564251: {Iface:virbr1 ExpiryTime:2024-07-22 00:40:54 +0000 UTC Type:0 Mac:52:54:00:92:9e:c7 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-564251 Clientid:01:52:54:00:92:9e:c7}
	I0721 23:52:12.194260   29454 main.go:141] libmachine: (ha-564251) DBG | domain ha-564251 has defined IP address 192.168.39.91 and MAC address 52:54:00:92:9e:c7 in network mk-ha-564251
	I0721 23:52:12.194460   29454 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0721 23:52:12.199314   29454 kubeadm.go:883] updating cluster {Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.226 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0721 23:52:12.199448   29454 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:52:12.199487   29454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:52:12.240466   29454 crio.go:514] all images are preloaded for cri-o runtime.
	I0721 23:52:12.240488   29454 crio.go:433] Images already preloaded, skipping extraction
	I0721 23:52:12.240541   29454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0721 23:52:12.275341   29454 crio.go:514] all images are preloaded for cri-o runtime.
	I0721 23:52:12.275366   29454 cache_images.go:84] Images are preloaded, skipping loading
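To see which images satisfied the preload check, the same JSON output can be rendered human-readably (jq is an assumption here; it is not necessarily installed on the VM):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'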
	I0721 23:52:12.275376   29454 kubeadm.go:934] updating node { 192.168.39.91 8443 v1.30.3 crio true true} ...
	I0721 23:52:12.275517   29454 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-564251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
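The kubelet flags above are written into a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in the log); once in place, the merged unit can be inspected on the node with:

    sudo systemctl cat kubelet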
	I0721 23:52:12.275615   29454 ssh_runner.go:195] Run: crio config
	I0721 23:52:12.319962   29454 cni.go:84] Creating CNI manager for ""
	I0721 23:52:12.319982   29454 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0721 23:52:12.319993   29454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 23:52:12.320017   29454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-564251 NodeName:ha-564251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 23:52:12.320138   29454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-564251"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
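As an aside, a generated config like the one above can be sanity-checked before use on kubeadm v1.26+ (the validate subcommand is assumed to be present in the pinned binaries; the path is where the log scps the file later):

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new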
	
	I0721 23:52:12.320159   29454 kube-vip.go:115] generating kube-vip config ...
	I0721 23:52:12.320202   29454 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0721 23:52:12.331323   29454 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0721 23:52:12.331438   29454 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
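For comparison, kube-vip can emit an equivalent static-pod manifest itself; the flags below mirror the env vars above (vip_interface, address, cp_enable, lb_enable) and are illustrative rather than what minikube ran here (check kube-vip manifest pod --help for the exact set):

    docker run --rm ghcr.io/kube-vip/kube-vip:v0.8.0 manifest pod \
      --interface eth0 \
      --address 192.168.39.254 \
      --controlplane \
      --arp \
      --leaderElection \
      --enableLoadBalancer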
	I0721 23:52:12.331544   29454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:52:12.340623   29454 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 23:52:12.340681   29454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0721 23:52:12.349409   29454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0721 23:52:12.364290   29454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:52:12.379049   29454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0721 23:52:12.394221   29454 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0721 23:52:12.411340   29454 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
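The grep above only checks for an existing entry; when it is missing, minikube appends one itself, roughly equivalent to the following (VIP from the cluster config earlier in the log):

    echo "192.168.39.254	control-plane.minikube.internal" | sudo tee -a /etc/hosts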
	I0721 23:52:12.414834   29454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:52:12.561811   29454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:52:12.575683   29454 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251 for IP: 192.168.39.91
	I0721 23:52:12.575706   29454 certs.go:194] generating shared ca certs ...
	I0721 23:52:12.575723   29454 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:52:12.575856   29454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0721 23:52:12.575896   29454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0721 23:52:12.575905   29454 certs.go:256] generating profile certs ...
	I0721 23:52:12.575982   29454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/client.key
	I0721 23:52:12.576008   29454 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.1f51e579
	I0721 23:52:12.576024   29454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.1f51e579 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91 192.168.39.202 192.168.39.89 192.168.39.254]
	I0721 23:52:12.630221   29454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.1f51e579 ...
	I0721 23:52:12.630251   29454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.1f51e579: {Name:mkc6e7a1da999f35092b2f3a848bc5ca259ba541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:52:12.630422   29454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.1f51e579 ...
	I0721 23:52:12.630433   29454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.1f51e579: {Name:mk9f2b972ea584c33e0797517e5cb49f297bf5d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:52:12.630496   29454 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt.1f51e579 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt
	I0721 23:52:12.630687   29454 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key.1f51e579 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key
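To confirm the regenerated apiserver cert carries all seven SANs listed above, including the 192.168.39.254 VIP (a manual check, OpenSSL 1.1.1+ syntax):

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt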
	I0721 23:52:12.630836   29454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key
	I0721 23:52:12.630850   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0721 23:52:12.630864   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0721 23:52:12.630876   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0721 23:52:12.630889   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0721 23:52:12.630900   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0721 23:52:12.630920   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0721 23:52:12.630934   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0721 23:52:12.630945   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0721 23:52:12.630997   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0721 23:52:12.631026   29454 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0721 23:52:12.631035   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0721 23:52:12.631054   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0721 23:52:12.631077   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0721 23:52:12.631099   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0721 23:52:12.631140   29454 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0721 23:52:12.631172   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:52:12.631185   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0721 23:52:12.631196   29454 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0721 23:52:12.631756   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:52:12.655133   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:52:12.676253   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:52:12.697396   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0721 23:52:12.721543   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0721 23:52:12.748027   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0721 23:52:12.771436   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:52:12.794513   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/ha-564251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:52:12.817623   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:52:12.841967   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0721 23:52:12.864821   29454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0721 23:52:12.887780   29454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 23:52:12.904930   29454 ssh_runner.go:195] Run: openssl version
	I0721 23:52:12.910424   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0721 23:52:12.921655   29454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0721 23:52:12.925908   29454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0721 23:52:12.925958   29454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0721 23:52:12.931321   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0721 23:52:12.941183   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0721 23:52:12.951339   29454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0721 23:52:12.955446   29454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0721 23:52:12.955488   29454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0721 23:52:12.960792   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0721 23:52:12.971723   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:52:12.983760   29454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:52:12.988092   29454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:52:12.988135   29454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:52:12.994233   29454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
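The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject-name hash plus a .0 suffix. Deriving one by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here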
	I0721 23:52:13.005385   29454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:52:13.009859   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0721 23:52:13.015519   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0721 23:52:13.022393   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0721 23:52:13.027778   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0721 23:52:13.033299   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0721 23:52:13.038740   29454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
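Each -checkend 86400 probe above exits 0 only if the certificate stays valid for at least another 86400 seconds (24 h); a standalone equivalent:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for >= 24h" || echo "expires within 24h"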
	I0721 23:52:13.044736   29454 kubeadm.go:392] StartCluster: {Name:ha-564251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-564251 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.226 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:52:13.044834   29454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0721 23:52:13.044876   29454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0721 23:52:13.088687   29454 cri.go:89] found id: "89eb1b99a458a2c828e88677b3a08c634ba9196fb4f1877edd74daf5fc76e5c9"
	I0721 23:52:13.088706   29454 cri.go:89] found id: "56b1813870485276ccb9eb6e72a676270185dc25ccfb41c3370823d1f4ee463e"
	I0721 23:52:13.088712   29454 cri.go:89] found id: "118160f8dc93973f5b5a80cbbf84ece3aa0be9f31f5000979b1fc88a2ac1b77b"
	I0721 23:52:13.088715   29454 cri.go:89] found id: "fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6"
	I0721 23:52:13.088719   29454 cri.go:89] found id: "db39c7c7e0f7c3c180022c9077b610ea8eafc5f03d2bee7dc27dafe1e2406bd0"
	I0721 23:52:13.088723   29454 cri.go:89] found id: "d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091"
	I0721 23:52:13.088727   29454 cri.go:89] found id: "b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5"
	I0721 23:52:13.088731   29454 cri.go:89] found id: "777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5"
	I0721 23:52:13.088734   29454 cri.go:89] found id: "bd2d1274e49866805b6ee3da185d88e7b587d19d55198cdca8d14f63466ee007"
	I0721 23:52:13.088741   29454 cri.go:89] found id: "22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624"
	I0721 23:52:13.088746   29454 cri.go:89] found id: "fb0b898c77f8dcba51562f4bc296a85dcf6c65be232e08cfa2451329e733faed"
	I0721 23:52:13.088764   29454 cri.go:89] found id: "17153bc2e8cea66d565ddd6d01e9c471e33927fc11681caee85b0f1bede1d0d3"
	I0721 23:52:13.088769   29454 cri.go:89] found id: "9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7"
	I0721 23:52:13.088772   29454 cri.go:89] found id: ""
	I0721 23:52:13.088818   29454 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.333773711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721606234333747680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28301c08-5136-4ad8-9c58-d97e0cc63718 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.334324820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=579c0b16-11b7-473e-b392-597093b90aa8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.334397954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=579c0b16-11b7-473e-b392-597093b90aa8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.338819868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e796a289f9b2fa5270c7a3b8fdaedbb1e2c7d7c5ff6857acbe442bd279ed525c,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721606019021817241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605979031396334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605979013588334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab812ef75a5856ea95837958537043b8d5cbf8c1c8ac59d11fe7b1898a896642,PodSandboxId:d0c1dee700092e75853ef153c0857c73f4c0050f95b23ccdd79341ed5a07468a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605971245533401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721605968015036748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b882287a09c86a6f5b774e9aa62305468184cba99a022f16f6da77f5224e011,PodSandboxId:988f31d520e5433192a4c8a6d0bbaed242a9c06160cf20b1a7ed68fd4d916070,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605953771523204,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d9b939d43bfee37ee200502ef531e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df,PodSandboxId:e31ffc81cb2deec490c88dfc5b08f48f4a116771fcb67719e806a030f4dc85f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605938169647767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e,PodSandboxId:49f01a734201f5e3e00ddfc8ac1ebad79c1207a87b26a19efb4521262c401546,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605938180127336,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343ecaacc
ece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5,PodSandboxId:9abc6400cbe83a330fe0c2a79addf4b9a2d9d0f5ff0060b5328e4dfc5548a065,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938058796009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52,PodSandboxId:0ece516c622933387722519cfb58e436078402858bbfd4e8d0fac4a8c3881f1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938039081670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f,PodSandboxId:9990219a19be206c35693e9953355b554ab87d2bbd74a4b713a2b26a103569ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605937884231388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99,PodSandboxId:7eafbd4df408b6083e8b375ee692885b994755fbb403ca5f5bf99804a2b596c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605937838978855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950
c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721605937832541801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721605937751410351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Anno
tations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721605438091344795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annota
tions:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306950278225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kuber
netes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306869203635,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721605295239678893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721605291575674051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721605270816096480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721605270662197546,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=579c0b16-11b7-473e-b392-597093b90aa8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.382986011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61d8c04c-5004-4ef2-9ff8-ad2bfaef3f52 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.383081022Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61d8c04c-5004-4ef2-9ff8-ad2bfaef3f52 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.384119013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb0aa453-2da3-4f19-a417-82666e0c1cb4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.384540873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721606234384518415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb0aa453-2da3-4f19-a417-82666e0c1cb4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.385090424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2115a57-68c9-4d0c-a904-945a35201feb name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.385171353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2115a57-68c9-4d0c-a904-945a35201feb name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.385677931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e796a289f9b2fa5270c7a3b8fdaedbb1e2c7d7c5ff6857acbe442bd279ed525c,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721606019021817241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605979031396334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605979013588334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab812ef75a5856ea95837958537043b8d5cbf8c1c8ac59d11fe7b1898a896642,PodSandboxId:d0c1dee700092e75853ef153c0857c73f4c0050f95b23ccdd79341ed5a07468a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605971245533401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721605968015036748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b882287a09c86a6f5b774e9aa62305468184cba99a022f16f6da77f5224e011,PodSandboxId:988f31d520e5433192a4c8a6d0bbaed242a9c06160cf20b1a7ed68fd4d916070,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605953771523204,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d9b939d43bfee37ee200502ef531e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df,PodSandboxId:e31ffc81cb2deec490c88dfc5b08f48f4a116771fcb67719e806a030f4dc85f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605938169647767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e,PodSandboxId:49f01a734201f5e3e00ddfc8ac1ebad79c1207a87b26a19efb4521262c401546,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605938180127336,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343ecaacc
ece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5,PodSandboxId:9abc6400cbe83a330fe0c2a79addf4b9a2d9d0f5ff0060b5328e4dfc5548a065,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938058796009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52,PodSandboxId:0ece516c622933387722519cfb58e436078402858bbfd4e8d0fac4a8c3881f1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938039081670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f,PodSandboxId:9990219a19be206c35693e9953355b554ab87d2bbd74a4b713a2b26a103569ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605937884231388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99,PodSandboxId:7eafbd4df408b6083e8b375ee692885b994755fbb403ca5f5bf99804a2b596c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605937838978855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950
c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721605937832541801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721605937751410351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Anno
tations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721605438091344795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annota
tions:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306950278225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kuber
netes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306869203635,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721605295239678893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721605291575674051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721605270816096480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721605270662197546,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2115a57-68c9-4d0c-a904-945a35201feb name=/runtime.v1.RuntimeService/ListContainers
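The ListContainers exchange above works the same way: the caller sends an empty ContainerFilter, crio takes the "No filters were applied, returning full container list" branch, and the response enumerates every container with its attempt number, state, restart count, and io.kubernetes.* pod labels. A minimal sketch of the same query, under the same socket and client assumptions as the previous example:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: default CRI-O socket, as in the previous sketch.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty ContainerFilter reproduces the "No filters were applied,
		// returning full container list" path in server/container_list.go.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Label/annotation keys match the ones in the payload above; CRI-O
			// sets them from the pod spec, so they are assumed present here.
			fmt.Printf("%s  %-24s  %-17s  restarts=%s\n",
				c.Id[:12],
				c.Labels["io.kubernetes.container.name"],
				c.State.String(),
				c.Annotations["io.kubernetes.container.restartCount"])
		}
	}

This is roughly what crictl ps -a renders as a table. Note that the CONTAINER_EXITED rows in the payload share io.kubernetes.pod.uid values with CONTAINER_RUNNING rows but carry lower attempt/restartCount values; they are superseded instances of the same pods.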
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.426511509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7248318-5d6a-4de7-b5cb-d9b912eaf8d9 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.426643619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7248318-5d6a-4de7-b5cb-d9b912eaf8d9 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.427816241Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7876c610-605c-49c2-8417-e1af2c4a0a79 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.428386513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721606234428362730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7876c610-605c-49c2-8417-e1af2c4a0a79 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.428878248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d09eb7da-776f-4c13-90c7-a09613e09393 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.428945306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d09eb7da-776f-4c13-90c7-a09613e09393 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.429469616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e796a289f9b2fa5270c7a3b8fdaedbb1e2c7d7c5ff6857acbe442bd279ed525c,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721606019021817241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605979031396334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605979013588334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab812ef75a5856ea95837958537043b8d5cbf8c1c8ac59d11fe7b1898a896642,PodSandboxId:d0c1dee700092e75853ef153c0857c73f4c0050f95b23ccdd79341ed5a07468a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605971245533401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721605968015036748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b882287a09c86a6f5b774e9aa62305468184cba99a022f16f6da77f5224e011,PodSandboxId:988f31d520e5433192a4c8a6d0bbaed242a9c06160cf20b1a7ed68fd4d916070,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605953771523204,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d9b939d43bfee37ee200502ef531e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df,PodSandboxId:e31ffc81cb2deec490c88dfc5b08f48f4a116771fcb67719e806a030f4dc85f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605938169647767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e,PodSandboxId:49f01a734201f5e3e00ddfc8ac1ebad79c1207a87b26a19efb4521262c401546,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605938180127336,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343ecaacc
ece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5,PodSandboxId:9abc6400cbe83a330fe0c2a79addf4b9a2d9d0f5ff0060b5328e4dfc5548a065,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938058796009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52,PodSandboxId:0ece516c622933387722519cfb58e436078402858bbfd4e8d0fac4a8c3881f1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938039081670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f,PodSandboxId:9990219a19be206c35693e9953355b554ab87d2bbd74a4b713a2b26a103569ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605937884231388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99,PodSandboxId:7eafbd4df408b6083e8b375ee692885b994755fbb403ca5f5bf99804a2b596c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605937838978855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950
c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721605937832541801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721605937751410351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Anno
tations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721605438091344795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annota
tions:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306950278225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kuber
netes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306869203635,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721605295239678893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721605291575674051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721605270816096480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721605270662197546,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d09eb7da-776f-4c13-90c7-a09613e09393 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.468941240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=759151e5-2fc1-46b7-a05f-4e58551e37d8 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.469064201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=759151e5-2fc1-46b7-a05f-4e58551e37d8 name=/runtime.v1.RuntimeService/Version
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.470049620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=451fea82-92ff-400d-a7c4-317099f48072 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.470614159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721606234470546508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=451fea82-92ff-400d-a7c4-317099f48072 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.471022202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebabaaf4-4006-4b0d-8fd3-2abff6e71f76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.471090192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebabaaf4-4006-4b0d-8fd3-2abff6e71f76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 21 23:57:14 ha-564251 crio[3684]: time="2024-07-21 23:57:14.471483691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e796a289f9b2fa5270c7a3b8fdaedbb1e2c7d7c5ff6857acbe442bd279ed525c,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721606019021817241,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721605979031396334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721605979013588334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotations:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab812ef75a5856ea95837958537043b8d5cbf8c1c8ac59d11fe7b1898a896642,PodSandboxId:d0c1dee700092e75853ef153c0857c73f4c0050f95b23ccdd79341ed5a07468a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721605971245533401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annotations:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf,PodSandboxId:b028ef860c1d68ec300ed16aefd8b39ee4e107ba13770d76c42bee88c9302ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721605968015036748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c1992e-23ca-41e0-b046-1b70a6f6f63a,},Annotations:map[string]string{io.kubernetes.container.hash: b513eddd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b882287a09c86a6f5b774e9aa62305468184cba99a022f16f6da77f5224e011,PodSandboxId:988f31d520e5433192a4c8a6d0bbaed242a9c06160cf20b1a7ed68fd4d916070,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721605953771523204,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d9b939d43bfee37ee200502ef531e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df,PodSandboxId:e31ffc81cb2deec490c88dfc5b08f48f4a116771fcb67719e806a030f4dc85f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721605938169647767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e,PodSandboxId:49f01a734201f5e3e00ddfc8ac1ebad79c1207a87b26a19efb4521262c401546,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721605938180127336,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343ecaacc
ece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5,PodSandboxId:9abc6400cbe83a330fe0c2a79addf4b9a2d9d0f5ff0060b5328e4dfc5548a065,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938058796009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kubernetes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52,PodSandboxId:0ece516c622933387722519cfb58e436078402858bbfd4e8d0fac4a8c3881f1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721605938039081670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f,PodSandboxId:9990219a19be206c35693e9953355b554ab87d2bbd74a4b713a2b26a103569ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721605937884231388,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99,PodSandboxId:7eafbd4df408b6083e8b375ee692885b994755fbb403ca5f5bf99804a2b596c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721605937838978855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950
c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146,PodSandboxId:89e61b0cd0f9512256b9bf9113da8da7fdc064d1ed1e7fbbdf566c7671d3bc6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721605937832541801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6d7167cb34330dce81114060b9b279,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: fc094dfb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102,PodSandboxId:dc671cbc7a7cbcfd2078e152730e69532557f2871063a1d9ce9a4f1b00b59432,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721605937751410351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973effc0455eb71d145acfc351605cda,},Anno
tations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3769ca1c0d18914f17b4221337b7551a450cdb097d134329de94eeb5575c11dc,PodSandboxId:4399dac80b57253050b6e94dd23326fbfe8a355c595245b8f16cc4fd27a8e2c5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721605438091344795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tvjh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab5aa04-3324-424b-9a21-ad06a8974d43,},Annota
tions:map[string]string{io.kubernetes.container.hash: d51ece7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6,PodSandboxId:60549b9fc09ba306925298cd6a61a07abc28a0a7416fa131445c10ffe3b4fd98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306950278225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bsbzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58d6f8-de63-49bf-9017-3cac954350d0,},Annotations:map[string]string{io.kuber
netes.container.hash: 456a9396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091,PodSandboxId:3cf5796c9ffab984f289139c9b3834485dfe8c8e8af70a641b3ccf2a6da8d8f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721605306869203635,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f4lqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebae638d-339c-4241-a5b3-ab4c766efc2f,},Annotations:map[string]string{io.kubernetes.container.hash: 4aca5881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5,PodSandboxId:8c7a9ed52b5b4333ec00a682b2b46ef908890c15390dba4d4f5162028286e594,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721605295239678893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz5md,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f109e939-9f9b-4fa8-b844-4c2652615933,},Annotations:map[string]string{io.kubernetes.container.hash: 1357db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5,PodSandboxId:997932c064fbecb29a32fe18c8fb95ffd1e37f45fc9a0efa24f7382a25c3a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721605291575674051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-srpl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faae2035-d506-4dd6-98b6-c3c5f5b53e84,},Annotations:map[string]string{io.kubernetes.container.hash: 81d8d5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624,PodSandboxId:2d4165e9b2df2c6191fd90fbca902b1025abfa9e3ad6b62defa6fa61727f4f10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721605270816096480,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45423657d5113031326950c3d576e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7,PodSandboxId:bc6861a50f8f62541dffa095b02f668c8d6bfc254ead2f05ce9c88e9d7b3b382,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721605270662197546,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-564251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2186505e6a989ef956c0bdc2fc2fdf,},Annotations:map[string]string{io.kubernetes.container.hash: cb39da34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebabaaf4-4006-4b0d-8fd3-2abff6e71f76 name=/runtime.v1.RuntimeService/ListContainers
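	
	The wall of text above is a single raw CRI reply: minikube records the CRI-O
	/runtime.v1.RuntimeService/ListContainers response verbatim through its
	otel-collector interceptor, which is why the &Container{...} entries wrap mid-field.
	The same listing can be pulled in readable form straight from the CRI socket; a
	minimal sketch, assuming the ha-564251 profile and the crio.sock path shown in the
	node annotations below:
	
	  $ minikube -p ha-564251 ssh
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json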
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e796a289f9b2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   b028ef860c1d6       storage-provisioner
	a6de206bc3bc8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   dc671cbc7a7cb       kube-controller-manager-ha-564251
	898386faaca7f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   89e61b0cd0f95       kube-apiserver-ha-564251
	ab812ef75a585       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   d0c1dee700092       busybox-fc5497c4f-tvjh7
	21a08c9335f49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   b028ef860c1d6       storage-provisioner
	7b882287a09c8       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   988f31d520e54       kube-vip-ha-564251
	e68ac889a48af       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   49f01a734201f       kindnet-jz5md
	a199590ae4534       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   e31ffc81cb2de       kube-proxy-srpl8
	343ecaaccece7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   9abc6400cbe83       coredns-7db6d8ff4d-bsbzk
	a10b7adf0b1d4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   0ece516c62293       coredns-7db6d8ff4d-f4lqn
	38f8d5dac7577       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   9990219a19be2       etcd-ha-564251
	8944ced8f719b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   7eafbd4df408b       kube-scheduler-ha-564251
	7f230e8efe835       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            2                   89e61b0cd0f95       kube-apiserver-ha-564251
	0effec1b1aa8c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Exited              kube-controller-manager   1                   dc671cbc7a7cb       kube-controller-manager-ha-564251
	3769ca1c0d189       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   4399dac80b572       busybox-fc5497c4f-tvjh7
	fd88a6f6b66dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   60549b9fc09ba       coredns-7db6d8ff4d-bsbzk
	d708ea287a4e1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   3cf5796c9ffab       coredns-7db6d8ff4d-f4lqn
	b2afbf6c4dfa0       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago      Exited              kindnet-cni               0                   8c7a9ed52b5b4       kindnet-jz5md
	777c36438bf0f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      15 minutes ago      Exited              kube-proxy                0                   997932c064fbe       kube-proxy-srpl8
	22bd5cac142d6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   2d4165e9b2df2       kube-scheduler-ha-564251
	9863a1f5cf334       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   bc6861a50f8f6       etcd-ha-564251
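	
	The table is the human-readable view of the same listing, and reading the ATTEMPT
	column against STATE tells the restart story: kube-apiserver attempt 2 and
	kube-controller-manager attempt 1 are Exited while attempts 3 and 2 are Running,
	and storage-provisioner needed a fourth attempt before it stuck. A sketch for
	listing just the casualties on the node (the --state filter is assumed to be
	available in the bundled crictl):
	
	  $ sudo crictl ps -a --state exited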
	
	
	==> coredns [343ecaaccece726e0dd5b1f0441b99ac4a1dd7eec3a100110f8c57925360c7f5] <==
	Trace[1161392864]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:52:36.683)
	Trace[1161392864]: [10.001414785s] [10.001414785s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1808281943]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Jul-2024 23:52:26.746) (total time: 10000ms):
	Trace[1808281943]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (23:52:36.747)
	Trace[1808281943]: [10.000995033s] [10.000995033s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49986->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49986->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
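	
	Every failure in this block is the same symptom: coredns cannot reach
	https://10.96.0.1:443, the in-cluster kubernetes Service VIP fronting the
	apiserver, while the control plane is cycling (TLS handshake timeouts while an
	apiserver is starting, then connection refused / no route to host while the VIP
	has no healthy backend). Once an endpoint is healthy again the reflector lists
	succeed and the plugin/ready probe clears. A minimal sketch for probing the VIP
	from inside the cluster (context name assumed to match the ha-564251 profile;
	any HTTP status back, even 401/403, proves the VIP routes):
	
	  $ kubectl --context ha-564251 get svc kubernetes -o wide
	  $ kubectl --context ha-564251 run vip-probe --rm -it --restart=Never \
	      --image=curlimages/curl -- curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1/version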
	
	
	==> coredns [a10b7adf0b1d4a2d3810fae5cf0b1f179eb0eb7e40547f84e6c9420dd7377e52] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[113173568]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Jul-2024 23:52:27.174) (total time: 10001ms):
	Trace[113173568]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:52:37.175)
	Trace[113173568]: [10.00151339s] [10.00151339s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57778->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:57778->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d708ea287a4e12ea0f9e33bd3f2c48ad2514df2810f0fa8fd3f8dc7a9b5ac091] <==
	[INFO] 10.244.1.2:34188 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014651s
	[INFO] 10.244.1.2:41501 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011577s
	[INFO] 10.244.1.2:34022 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084216s
	[INFO] 10.244.2.2:36668 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118928s
	[INFO] 10.244.0.4:60553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129219s
	[INFO] 10.244.0.4:34229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158514s
	[INFO] 10.244.0.4:35099 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013345s
	[INFO] 10.244.1.2:60128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204062s
	[INFO] 10.244.1.2:51220 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169537s
	[INFO] 10.244.1.2:50118 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000261213s
	[INFO] 10.244.2.2:42616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012241s
	[INFO] 10.244.2.2:51984 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223089s
	[INFO] 10.244.2.2:60866 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100348s
	[INFO] 10.244.0.4:38494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093863s
	[INFO] 10.244.0.4:56964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080856s
	[INFO] 10.244.0.4:37413 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172185s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1909&timeout=8m53s&timeoutSeconds=533&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1909&timeout=5m38s&timeoutSeconds=338&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
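	
	This instance is one of the pre-restart coredns pods: after its watch connections
	were severed (GOAWAY, then no route to host), the credentials it kept presenting
	stopped being accepted (Unauthorized), and it was finally sent SIGTERM and
	replaced by the attempt-1 containers listed earlier. A sketch to confirm
	replacement rather than in-place recovery (assumed context as above):
	
	  $ kubectl --context ha-564251 -n kube-system get pods -l k8s-app=kube-dns \
	      -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount,STARTED:.status.startTime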
	
	
	==> coredns [fd88a6f6b66dd32b5fcb085673270f6ccc21df6cb1d102894a31ee1fdfdc51c6] <==
	[INFO] 10.244.1.2:47400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171001s
	[INFO] 10.244.1.2:51399 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162839s
	[INFO] 10.244.2.2:46920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139973s
	[INFO] 10.244.2.2:45334 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001092856s
	[INFO] 10.244.0.4:53396 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109772s
	[INFO] 10.244.0.4:54634 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001652249s
	[INFO] 10.244.0.4:45490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147442s
	[INFO] 10.244.0.4:46915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090743s
	[INFO] 10.244.0.4:60906 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127948s
	[INFO] 10.244.0.4:36593 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118548s
	[INFO] 10.244.1.2:59477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105785s
	[INFO] 10.244.2.2:48044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138738s
	[INFO] 10.244.2.2:48209 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093024s
	[INFO] 10.244.2.2:54967 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089783s
	[INFO] 10.244.0.4:47425 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088831s
	[INFO] 10.244.1.2:59455 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131678s
	[INFO] 10.244.2.2:60606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089108s
	[INFO] 10.244.0.4:46173 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097876s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1909&timeout=8m19s&timeoutSeconds=499&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
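	
	Before the restart this pod was answering normally: the NOERROR lines are routine
	service lookups, and the NXDOMAIN responses for names like
	kubernetes.default.default.svc.cluster.local are just the resolver walking the
	pod's search path before trying the bare name. A sketch to see that search path
	from any pod with the default dnsPolicy (image choice is arbitrary):
	
	  $ kubectl --context ha-564251 run resolv-check --rm -it --restart=Never \
	      --image=busybox:1.36 -- cat /etc/resolv.conf
	  # expected shape:
	  #   search default.svc.cluster.local svc.cluster.local cluster.local
	  #   nameserver 10.96.0.10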
	
	
	==> describe nodes <==
	Name:               ha-564251
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T23_41_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:57:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:53:04 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:53:04 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:53:04 +0000   Sun, 21 Jul 2024 23:41:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:53:04 +0000   Sun, 21 Jul 2024 23:41:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    ha-564251
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 83877339e2d74557b5e6d75fd0a30c5b
	  System UUID:                83877339-e2d7-4557-b5e6-d75fd0a30c5b
	  Boot ID:                    4d4acbc6-fdf1-4a14-b622-8bad377224dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tvjh7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-bsbzk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-f4lqn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-564251                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-jz5md                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-564251             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-564251    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-srpl8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-564251             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-564251                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4m10s              kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-564251 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-564251 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-564251 status is now: NodeHasSufficientMemory
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     15m                kubelet          Node ha-564251 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node ha-564251 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node ha-564251 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           15m                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal   NodeReady                15m                kubelet          Node ha-564251 status is now: NodeReady
	  Normal   RegisteredNode           14m                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Warning  ContainerGCFailed        5m54s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m                 node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal   RegisteredNode           3m59s              node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
	  Normal   RegisteredNode           3m5s               node-controller  Node ha-564251 event: Registered Node ha-564251 in Controller
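	
	The one warning in the primary's event stream, ContainerGCFailed with "dial unix
	/var/run/crio/crio.sock: connect: no such file or directory", pins the 5m54s mark
	as the window when CRI-O itself was down and the kubelet had no runtime to talk
	to; each RegisteredNode after it is a newly elected controller-manager re-adding
	the node. A sketch for confirming both daemons recovered on the guest (assumes
	the stock minikube ISO's systemd unit names):
	
	  $ minikube -p ha-564251 ssh -- sudo systemctl is-active crio kubelet
	  $ minikube -p ha-564251 ssh -- sudo crictl info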
	
	
	Name:               ha-564251-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_42_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:42:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:57:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:55:59 +0000   Sun, 21 Jul 2024 23:55:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:55:59 +0000   Sun, 21 Jul 2024 23:55:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:55:59 +0000   Sun, 21 Jul 2024 23:55:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:55:59 +0000   Sun, 21 Jul 2024 23:55:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-564251-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8db54debc3f459a84145497caff8bc1
	  System UUID:                e8db54de-bc3f-459a-8414-5497caff8bc1
	  Boot ID:                    06f34f0d-9e5a-4914-968f-a7b4b9481516
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2jrmb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-564251-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-99b2q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-564251-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-564251-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8c6vn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-564251-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-564251-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-564251-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-564251-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-564251-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-564251-m02 status is now: NodeNotReady
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m41s (x8 over 4m41s)  kubelet          Node ha-564251-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m41s (x8 over 4m41s)  kubelet          Node ha-564251-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m41s (x7 over 4m41s)  kubelet          Node ha-564251-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                     node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  RegisteredNode           3m5s                   node-controller  Node ha-564251-m02 event: Registered Node ha-564251-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-564251-m02 status is now: NodeNotReady
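	
	m02's event stream mirrors the primary's (kubelet restart at ~4m41s, then one
	RegisteredNode per recovering controller-manager), ending with a NodeNotReady
	flagged by the node-controller 109s ago. A quick cluster-wide view of current
	node state (assumed context as above):
	
	  $ kubectl --context ha-564251 get nodes -o wide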
	
	
	Name:               ha-564251-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-564251-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-564251
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_21T23_44_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:44:31 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-564251-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:54:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Jul 2024 23:54:27 +0000   Sun, 21 Jul 2024 23:55:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Jul 2024 23:54:27 +0000   Sun, 21 Jul 2024 23:55:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Jul 2024 23:54:27 +0000   Sun, 21 Jul 2024 23:55:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Jul 2024 23:54:27 +0000   Sun, 21 Jul 2024 23:55:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    ha-564251-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf784ac43fb240a1b428a7ebf8ca34bc
	  System UUID:                cf784ac4-3fb2-40a1-b428-a7ebf8ca34bc
	  Boot ID:                    cafa4aaa-1679-45a7-8af0-acf5d1fb4d0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-77tzg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-6mfjp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-lv5zw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-564251-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-564251-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-564251-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-564251-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m                     node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   RegisteredNode           3m59s                  node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   RegisteredNode           3m5s                   node-controller  Node ha-564251-m04 event: Registered Node ha-564251-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-564251-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-564251-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-564251-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-564251-m04 has been rebooted, boot id: cafa4aaa-1679-45a7-8af0-acf5d1fb4d0b
	  Normal   NodeReady                2m47s                  kubelet          Node ha-564251-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 3m20s)   node-controller  Node ha-564251-m04 status is now: NodeNotReady
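	
	The Unknown conditions and unreachable taints above mean the node-controller lost contact with the kubelet on ha-564251-m04. A minimal client-go sketch (assuming a default kubeconfig path; not part of the test suite) that lists nodes in this state:
	
	package main
	
	import (
		"context"
		"fmt"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		// Assumed kubeconfig location; adjust for the minikube profile under test.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				// Ready=Unknown matches "Kubelet stopped posting node status" above.
				if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
					fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}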
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul21 23:41] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.053909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055459] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.166215] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.145388] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268301] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.918090] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.419554] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.062251] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.216979] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075586] kauditd_printk_skb: 79 callbacks suppressed
	[ +11.003747] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.099946] kauditd_printk_skb: 34 callbacks suppressed
	[Jul21 23:42] kauditd_printk_skb: 26 callbacks suppressed
	[Jul21 23:52] systemd-fstab-generator[3604]: Ignoring "noauto" option for root device
	[  +0.150437] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +0.175334] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
	[  +0.144219] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +0.262608] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.725195] systemd-fstab-generator[3772]: Ignoring "noauto" option for root device
	[  +4.987058] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.091585] kauditd_printk_skb: 85 callbacks suppressed
	[ +36.550376] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [38f8d5dac75779a2eb667d72034040989408543547ce98bcc2d50ca70be6333f] <==
	{"level":"info","ts":"2024-07-21T23:53:50.824631Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:53:50.84771Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3a19c1a50e8a825c","to":"168ed4a7c6431682","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-21T23:53:50.847816Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:05.883913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.806023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-21T23:54:05.884109Z","caller":"traceutil/trace.go:171","msg":"trace[401042962] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:2397; }","duration":"127.055922ms","start":"2024-07-21T23:54:05.757019Z","end":"2024-07-21T23:54:05.884075Z","steps":["trace[401042962] 'count revisions from in-memory index tree'  (duration: 125.872969ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:54:40.810856Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.89:38910","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-21T23:54:40.844306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a19c1a50e8a825c switched to configuration voters=(944820537289605620 4186590243275309660)"}
	{"level":"info","ts":"2024-07-21T23:54:40.849359Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"674de9ca81299bdc","local-member-id":"3a19c1a50e8a825c","removed-remote-peer-id":"168ed4a7c6431682","removed-remote-peer-urls":["https://192.168.39.89:2380"]}
	{"level":"info","ts":"2024-07-21T23:54:40.849447Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:40.849755Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:54:40.849797Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:40.850047Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:54:40.850075Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:40.850102Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"3a19c1a50e8a825c","removed-member-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:40.850134Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-07-21T23:54:40.850351Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:40.850526Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682","error":"context canceled"}
	{"level":"warn","ts":"2024-07-21T23:54:40.850625Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"168ed4a7c6431682","error":"failed to read 168ed4a7c6431682 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-21T23:54:40.850656Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:40.85081Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-07-21T23:54:40.850844Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:54:40.850862Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:54:40.850875Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"3a19c1a50e8a825c","removed-remote-peer-id":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:40.866712Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3a19c1a50e8a825c","remote-peer-id-stream-handler":"3a19c1a50e8a825c","remote-peer-id-from":"168ed4a7c6431682"}
	{"level":"warn","ts":"2024-07-21T23:54:40.874784Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.89:46190","server-name":"","error":"read tcp 192.168.39.91:2380->192.168.39.89:46190: read: connection reset by peer"}
	
	
	==> etcd [9863a1f5cf334b2648d5bfb3c8ee1f5ac08edd5de4509a05bd5e6a892757b1b7] <==
	2024/07/21 23:50:39 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/21 23:50:39 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-21T23:50:39.707492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.616579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-21T23:50:39.707541Z","caller":"traceutil/trace.go:171","msg":"trace[242247877] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"165.695098ms","start":"2024-07-21T23:50:39.541814Z","end":"2024-07-21T23:50:39.707509Z","steps":["trace[242247877] 'agreement among raft nodes before linearized reading'  (duration: 165.637008ms)"],"step_count":1}
	2024/07/21 23:50:39 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-21T23:50:39.846465Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.91:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-21T23:50:39.846661Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.91:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-21T23:50:39.846792Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3a19c1a50e8a825c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-21T23:50:39.847018Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847065Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.84709Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847196Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847314Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847466Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847509Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d1cad45d5a401f4"}
	{"level":"info","ts":"2024-07-21T23:50:39.847625Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.847668Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.847763Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.847923Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.848014Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.848087Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3a19c1a50e8a825c","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.848101Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"168ed4a7c6431682"}
	{"level":"info","ts":"2024-07-21T23:50:39.851698Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.91:2380"}
	{"level":"info","ts":"2024-07-21T23:50:39.851844Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.91:2380"}
	{"level":"info","ts":"2024-07-21T23:50:39.851876Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-564251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.91:2380"],"advertise-client-urls":["https://192.168.39.91:2379"]}
	
	
	==> kernel <==
	 23:57:15 up 16 min,  0 users,  load average: 0.28, 0.41, 0.29
	Linux ha-564251 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b2afbf6c4dfa02880208f9cb48d9db767fe41df640657b5b4e7f8b7e7a2991f5] <==
	I0721 23:50:16.151634       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:50:16.151641       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:50:16.151860       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:50:16.151883       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:50:16.151948       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:50:16.151968       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:50:26.157449       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:50:26.157491       1 main.go:299] handling current node
	I0721 23:50:26.157504       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:50:26.157509       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:50:26.157728       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:50:26.157750       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:50:26.157811       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:50:26.157826       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	E0721 23:50:28.864826       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1890&timeout=7m16s&timeoutSeconds=436&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0721 23:50:36.151019       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0721 23:50:36.151076       1 main.go:322] Node ha-564251-m03 has CIDR [10.244.2.0/24] 
	I0721 23:50:36.151228       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:50:36.151248       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:50:36.151328       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:50:36.151347       1 main.go:299] handling current node
	I0721 23:50:36.151359       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:50:36.151364       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	W0721 23:50:38.688200       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0721 23:50:38.688651       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kindnet [e68ac889a48af082d92c6555d8cffdf3fd23b5bdaafda00a74d0fb50b6d8a68e] <==
	I0721 23:56:29.164607       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:56:39.161679       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:56:39.161796       1 main.go:299] handling current node
	I0721 23:56:39.161825       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:56:39.161843       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:56:39.161989       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:56:39.162009       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:56:49.161200       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:56:49.161326       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:56:49.161519       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:56:49.161625       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:56:49.161754       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:56:49.161789       1 main.go:299] handling current node
	I0721 23:56:59.162685       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:56:59.162745       1 main.go:299] handling current node
	I0721 23:56:59.162761       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:56:59.162766       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:56:59.162904       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:56:59.162924       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
	I0721 23:57:09.162243       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0721 23:57:09.162442       1 main.go:299] handling current node
	I0721 23:57:09.162483       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0721 23:57:09.162504       1 main.go:322] Node ha-564251-m02 has CIDR [10.244.1.0/24] 
	I0721 23:57:09.162708       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0721 23:57:09.162740       1 main.go:322] Node ha-564251-m04 has CIDR [10.244.3.0/24] 
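	
	Each kindnet line above pairs a node's InternalIP with the PodCIDR it advertises. An illustrative client-go sketch that prints the same mapping, using in-cluster config as a daemonset pod would:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // runs inside the cluster, like kindnet
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			var ip string
			for _, a := range n.Status.Addresses {
				if a.Type == corev1.NodeInternalIP {
					ip = a.Address
				}
			}
			// Mirrors the "Node X has CIDR [...]" lines in the log above.
			fmt.Printf("Node %s (%s) has CIDR %v\n", n.Name, ip, n.Spec.PodCIDRs)
		}
	}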
	
	
	==> kube-apiserver [7f230e8efe8352dafa2cbb9551fd38e7c8911ad9f1bb8704a596ba6c8674c146] <==
	I0721 23:52:18.649439       1 options.go:221] external host was not specified, using 192.168.39.91
	I0721 23:52:18.666065       1 server.go:148] Version: v1.30.3
	I0721 23:52:18.666110       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:52:18.996220       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0721 23:52:19.004685       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0721 23:52:19.011462       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0721 23:52:19.011501       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0721 23:52:19.013787       1 instance.go:299] Using reconciler: lease
	W0721 23:52:38.993828       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0721 23:52:38.995843       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0721 23:52:39.014645       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
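	
	This apiserver instance died because it could not reach etcd on 127.0.0.1:2379 before the storage-factory deadline. A minimal reachability probe for that symptom; it only checks the TCP port, not etcd health:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Only verifies the TCP port answers; it does not speak etcd's protocol.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 3*time.Second)
		if err != nil {
			fmt.Println("etcd client port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("etcd client port is accepting connections")
	}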
	
	
	==> kube-apiserver [898386faaca7f05a0490f64756a558e7b7f768ea8f9651298a0a5628030a426a] <==
	I0721 23:53:01.129364       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0721 23:53:01.093532       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0721 23:53:01.191292       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0721 23:53:01.191323       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0721 23:53:01.194012       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0721 23:53:01.194206       1 shared_informer.go:320] Caches are synced for configmaps
	I0721 23:53:01.201817       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0721 23:53:01.201973       1 aggregator.go:165] initial CRD sync complete...
	I0721 23:53:01.202071       1 autoregister_controller.go:141] Starting autoregister controller
	I0721 23:53:01.202096       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0721 23:53:01.202119       1 cache.go:39] Caches are synced for autoregister controller
	W0721 23:53:01.206738       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.89]
	I0721 23:53:01.248825       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0721 23:53:01.248971       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0721 23:53:01.249013       1 policy_source.go:224] refreshing policies
	I0721 23:53:01.289715       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0721 23:53:01.294214       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0721 23:53:01.295242       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0721 23:53:01.298162       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0721 23:53:01.308743       1 controller.go:615] quota admission added evaluator for: endpoints
	I0721 23:53:01.316447       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0721 23:53:01.320652       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0721 23:53:02.100131       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0721 23:53:02.543027       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.89 192.168.39.91]
	W0721 23:53:12.541409       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.91]
	
	
	==> kube-controller-manager [0effec1b1aa8cef7689b9169564fb22ddc8c5d848a31f6d773a53b7a75abc102] <==
	I0721 23:52:19.303727       1 serving.go:380] Generated self-signed cert in-memory
	I0721 23:52:19.971008       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0721 23:52:19.971121       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:52:19.972595       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0721 23:52:19.972711       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0721 23:52:19.972711       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0721 23:52:19.972857       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0721 23:52:40.021466       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.91:8443/healthz\": dial tcp 192.168.39.91:8443: connect: connection refused"
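	
	The controller-manager gave up waiting for https://192.168.39.91:8443/healthz. A quick sketch of the same probe; InsecureSkipVerify is an assumption for local triage against the self-signed test cert, and depending on the cluster's anonymous-auth setting the endpoint may answer 403 instead of 200:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip verification of the test cluster's self-signed cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.91:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}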
	
	
	==> kube-controller-manager [a6de206bc3bc85178f3166f6997747b6480e7c4959937a0c4e2bf05120058788] <==
	I0721 23:55:25.436314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.759748ms"
	I0721 23:55:25.437768       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.856µs"
	I0721 23:55:29.593725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.330572ms"
	I0721 23:55:29.594386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.696µs"
	E0721 23:55:33.940946       1 gc_controller.go:153] "Failed to get node" err="node \"ha-564251-m03\" not found" logger="pod-garbage-collector-controller" node="ha-564251-m03"
	E0721 23:55:33.941058       1 gc_controller.go:153] "Failed to get node" err="node \"ha-564251-m03\" not found" logger="pod-garbage-collector-controller" node="ha-564251-m03"
	E0721 23:55:33.941086       1 gc_controller.go:153] "Failed to get node" err="node \"ha-564251-m03\" not found" logger="pod-garbage-collector-controller" node="ha-564251-m03"
	E0721 23:55:33.941125       1 gc_controller.go:153] "Failed to get node" err="node \"ha-564251-m03\" not found" logger="pod-garbage-collector-controller" node="ha-564251-m03"
	E0721 23:55:33.941154       1 gc_controller.go:153] "Failed to get node" err="node \"ha-564251-m03\" not found" logger="pod-garbage-collector-controller" node="ha-564251-m03"
	I0721 23:55:33.952122       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-564251-m03"
	I0721 23:55:33.978923       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-564251-m03"
	I0721 23:55:33.979015       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-s2t8k"
	I0721 23:55:34.011890       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-s2t8k"
	I0721 23:55:34.011930       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-564251-m03"
	I0721 23:55:34.043507       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-564251-m03"
	I0721 23:55:34.043623       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-564251-m03"
	I0721 23:55:34.066186       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-564251-m03"
	I0721 23:55:34.066321       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-564251-m03"
	I0721 23:55:34.087662       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-564251-m03"
	I0721 23:55:34.087870       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2xlks"
	I0721 23:55:34.109270       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2xlks"
	I0721 23:55:34.109397       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-564251-m03"
	I0721 23:55:34.141383       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-564251-m03"
	I0721 23:56:03.075032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.994025ms"
	I0721 23:56:03.075615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.331µs"
	
	
	==> kube-proxy [777c36438bf0fd182c24ffad47b5fc40053e0a4199bc08e6d3c189061b5a0df5] <==
	E0721 23:49:22.878781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:25.950043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:25.950182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:25.950235       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:25.950295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:25.950097       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:25.950359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:32.414102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:32.414241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:32.415431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:32.415481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:32.415784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:32.415819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:41.631709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:41.631973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:44.702879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:44.703015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:49:44.702957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:49:44.703089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:50:03.134753       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:50:03.134828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-564251&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:50:03.134909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:50:03.134939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1891": dial tcp 192.168.39.254:8443: connect: no route to host
	W0721 23:50:09.278210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0721 23:50:09.278379       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [a199590ae4534b2ebc2fc7c2c569deccf5173c968b71c4e47450cbdef61865df] <==
	I0721 23:52:19.581538       1 server_linux.go:69] "Using iptables proxy"
	E0721 23:52:21.375416       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0721 23:52:24.447023       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0721 23:52:27.518325       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0721 23:52:33.662921       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0721 23:52:45.950492       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-564251\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0721 23:53:03.990913       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.91"]
	I0721 23:53:04.027284       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:53:04.027389       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:53:04.027429       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:53:04.029770       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:53:04.030001       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:53:04.030025       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:53:04.031402       1 config.go:192] "Starting service config controller"
	I0721 23:53:04.031448       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:53:04.031470       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:53:04.031486       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:53:04.033331       1 config.go:319] "Starting node config controller"
	I0721 23:53:04.033362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:53:04.131922       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0721 23:53:04.132000       1 shared_informer.go:320] Caches are synced for service config
	I0721 23:53:04.133451       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [22bd5cac142d60e80aad43c91097a4dcce18202bd09acf95e3ac03411d4a8624] <==
	W0721 23:50:34.443283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:50:34.443353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0721 23:50:34.597881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0721 23:50:34.597930       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0721 23:50:36.296529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0721 23:50:36.296619       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0721 23:50:37.134957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0721 23:50:37.135068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0721 23:50:37.449203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0721 23:50:37.449279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0721 23:50:37.656813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0721 23:50:37.656897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0721 23:50:37.820460       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0721 23:50:37.820538       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0721 23:50:38.041209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0721 23:50:38.041304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0721 23:50:38.127874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0721 23:50:38.127988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0721 23:50:38.209687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0721 23:50:38.209802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0721 23:50:39.054425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0721 23:50:39.054457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0721 23:50:39.376172       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:50:39.376249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:50:39.693013       1 run.go:74] "command failed" err="finished without leader elect"
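	
	"finished without leader elect" is the scheduler exiting its client-go leader-election loop after the RBAC and connection failures above. A minimal sketch of that pattern with illustrative names ("demo-scheduler" is not a real component):
	
	package main
	
	import (
		"context"
		"fmt"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "demo-scheduler", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { fmt.Println("leading") },
				// When lease renewal fails (e.g. apiserver unreachable), the
				// process falls out of this loop, as the scheduler did above.
				OnStoppedLeading: func() { fmt.Println("lost leadership, exiting") },
			},
		})
	}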
	
	
	==> kube-scheduler [8944ced8f719b1577d7ac466116cff8fa5a16ff9741f36ddc57925d19cb12e99] <==
	W0721 23:52:56.408145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.91:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:56.408206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.91:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:57.181846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.91:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:57.181913       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.91:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:57.385594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.91:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:57.385731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.91:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:57.689779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:57.689841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:52:58.298244       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	E0721 23:52:58.298331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.91:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.91:8443: connect: connection refused
	W0721 23:53:01.178257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0721 23:53:01.178334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0721 23:53:01.180865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0721 23:53:01.181312       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0721 23:53:01.181045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0721 23:53:01.181334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0721 23:53:01.181206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0721 23:53:01.181347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0721 23:53:01.181545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0721 23:53:01.181754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0721 23:53:01.525848       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0721 23:54:37.546095       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-77tzg\": pod busybox-fc5497c4f-77tzg is already assigned to node \"ha-564251-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-77tzg" node="ha-564251-m04"
	E0721 23:54:37.550499       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ee975c3f-7e11-4692-a8a1-c2902ea54e77(default/busybox-fc5497c4f-77tzg) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-77tzg"
	E0721 23:54:37.551149       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-77tzg\": pod busybox-fc5497c4f-77tzg is already assigned to node \"ha-564251-m04\"" pod="default/busybox-fc5497c4f-77tzg"
	I0721 23:54:37.551262       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-77tzg" node="ha-564251-m04"
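Two distinct failure modes appear in this section. The "connection refused" retries mean the API server endpoint at 192.168.39.91:8443 was down; once it returned, the informer caches synced (23:53:01.525848). The later "already assigned" binding error is consistent with an HA failover race, where a second scheduler instance tries to bind a pod the active scheduler already placed; the scheduler detects this and aborts requeueing, so the pod is unaffected. Illustrative probes for both (assuming the guest image ships curl; not part of the recorded run):

    $ minikube -p ha-564251 ssh -- curl -sk https://192.168.39.91:8443/healthz
    $ kubectl --context ha-564251 -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'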
	
	
	==> kubelet <==
	Jul 21 23:53:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:53:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:53:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:53:26 ha-564251 kubelet[1363]: I0721 23:53:26.002768    1363 scope.go:117] "RemoveContainer" containerID="21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf"
	Jul 21 23:53:26 ha-564251 kubelet[1363]: E0721 23:53:26.002984    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75c1992e-23ca-41e0-b046-1b70a6f6f63a)\"" pod="kube-system/storage-provisioner" podUID="75c1992e-23ca-41e0-b046-1b70a6f6f63a"
	Jul 21 23:53:39 ha-564251 kubelet[1363]: I0721 23:53:39.002448    1363 scope.go:117] "RemoveContainer" containerID="21a08c9335f4926ac4a8faeab8ff017029d7b724bb145754e9e4f5088c0d2eaf"
	Jul 21 23:53:42 ha-564251 kubelet[1363]: I0721 23:53:42.106833    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-tvjh7" podStartSLOduration=585.511669388 podStartE2EDuration="9m48.106798691s" podCreationTimestamp="2024-07-21 23:43:54 +0000 UTC" firstStartedPulling="2024-07-21 23:43:55.474164809 +0000 UTC m=+155.595946884" lastFinishedPulling="2024-07-21 23:43:58.069294118 +0000 UTC m=+158.191076187" observedRunningTime="2024-07-21 23:43:58.652518141 +0000 UTC m=+158.774300262" watchObservedRunningTime="2024-07-21 23:53:42.106798691 +0000 UTC m=+742.228580751"
	Jul 21 23:53:53 ha-564251 kubelet[1363]: I0721 23:53:53.002511    1363 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-564251" podUID="e865cc87-be77-43f3-bef2-4c47dbe7ffe5"
	Jul 21 23:53:53 ha-564251 kubelet[1363]: I0721 23:53:53.021981    1363 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-564251"
	Jul 21 23:54:00 ha-564251 kubelet[1363]: I0721 23:54:00.069088    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-564251" podStartSLOduration=7.069049566 podStartE2EDuration="7.069049566s" podCreationTimestamp="2024-07-21 23:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-21 23:54:00.068672158 +0000 UTC m=+760.190454237" watchObservedRunningTime="2024-07-21 23:54:00.069049566 +0000 UTC m=+760.190831640"
	Jul 21 23:54:20 ha-564251 kubelet[1363]: E0721 23:54:20.025098    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:54:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:54:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:54:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:54:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:55:20 ha-564251 kubelet[1363]: E0721 23:55:20.021132    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:55:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:55:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:55:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:55:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 21 23:56:20 ha-564251 kubelet[1363]: E0721 23:56:20.021152    1363 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 21 23:56:20 ha-564251 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 21 23:56:20 ha-564251 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 21 23:56:20 ha-564251 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 21 23:56:20 ha-564251 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
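The recurring canary failure means the guest kernel has no ip6tables nat table (the ip6table_nat module is absent or not loaded); the kubelet probes that table periodically via its KUBE-KUBELET-CANARY chain, and on an IPv4-only cluster the noise is generally cosmetic. A minimal check from the host (a sketch; whether modprobe succeeds depends on which modules are built into the guest kernel):

    $ minikube -p ha-564251 ssh -- "lsmod | grep ip6table_nat"
    $ minikube -p ha-564251 ssh -- sudo modprobe ip6table_nat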
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 23:57:14.055108   31816 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-5094/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
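The stderr line above reports a size limit rather than corruption: logs.go reads lastStart.txt with a bufio.Scanner, and Go's bufio.Scanner rejects tokens above its default 64 KiB cap, so at least one line in that file exceeds 64 KiB. The longest line can be measured with GNU wc (illustrative; not part of the recorded run):

    $ wc -L /home/jenkins/minikube-integration/19312-5094/.minikube/logs/lastStart.txt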
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-564251 -n ha-564251
helpers_test.go:261: (dbg) Run:  kubectl --context ha-564251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.42s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (324.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-332426
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-332426
E0722 00:12:54.283195   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-332426: exit status 82 (2m1.777282808s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-332426-m03"  ...
	* Stopping node "multinode-332426-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-332426" : exit status 82
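Exit status 82 accompanies minikube's GUEST_STOP_TIMEOUT shown in the stderr above: the stop command gave up while a guest was still in the "Running" state. With the kvm2 driver the domains can be inspected, and if need be forced off, through libvirt directly (illustrative commands; the domain names assume the driver's default machine-name mapping):

    $ virsh -c qemu:///system list --all
    $ virsh -c qemu:///system destroy multinode-332426-m02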
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-332426 --wait=true -v=8 --alsologtostderr
E0722 00:14:55.172225   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-332426 --wait=true -v=8 --alsologtostderr: (3m20.690209944s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-332426
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-332426 -n multinode-332426
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-332426 logs -n 25: (1.36169321s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m02:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3372010046/001/cp-test_multinode-332426-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m02:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426:/home/docker/cp-test_multinode-332426-m02_multinode-332426.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n multinode-332426 sudo cat                                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /home/docker/cp-test_multinode-332426-m02_multinode-332426.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m02:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03:/home/docker/cp-test_multinode-332426-m02_multinode-332426-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n multinode-332426-m03 sudo cat                                   | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /home/docker/cp-test_multinode-332426-m02_multinode-332426-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp testdata/cp-test.txt                                                | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3372010046/001/cp-test_multinode-332426-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426:/home/docker/cp-test_multinode-332426-m03_multinode-332426.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n multinode-332426 sudo cat                                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /home/docker/cp-test_multinode-332426-m03_multinode-332426.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02:/home/docker/cp-test_multinode-332426-m03_multinode-332426-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n multinode-332426-m02 sudo cat                                   | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /home/docker/cp-test_multinode-332426-m03_multinode-332426-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-332426 node stop m03                                                          | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	| node    | multinode-332426 node start                                                             | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-332426                                                                | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:12 UTC |                     |
	| stop    | -p multinode-332426                                                                     | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:12 UTC |                     |
	| start   | -p multinode-332426                                                                     | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:14 UTC | 22 Jul 24 00:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-332426                                                                | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:17 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:14:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:14:30.349404   41236 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:14:30.349527   41236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:14:30.349537   41236 out.go:304] Setting ErrFile to fd 2...
	I0722 00:14:30.349543   41236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:14:30.349732   41236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:14:30.350251   41236 out.go:298] Setting JSON to false
	I0722 00:14:30.351152   41236 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3414,"bootTime":1721603856,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:14:30.351207   41236 start.go:139] virtualization: kvm guest
	I0722 00:14:30.353371   41236 out.go:177] * [multinode-332426] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:14:30.354528   41236 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:14:30.354531   41236 notify.go:220] Checking for updates...
	I0722 00:14:30.356633   41236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:14:30.357710   41236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:14:30.358689   41236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:14:30.359791   41236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:14:30.360808   41236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:14:30.362406   41236 config.go:182] Loaded profile config "multinode-332426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:14:30.362530   41236 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:14:30.362998   41236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:14:30.363050   41236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:14:30.377754   41236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37011
	I0722 00:14:30.378267   41236 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:14:30.378940   41236 main.go:141] libmachine: Using API Version  1
	I0722 00:14:30.378963   41236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:14:30.379298   41236 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:14:30.379493   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:14:30.414565   41236 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:14:30.415753   41236 start.go:297] selected driver: kvm2
	I0722 00:14:30.415774   41236 start.go:901] validating driver "kvm2" against &{Name:multinode-332426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:14:30.415887   41236 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:14:30.416229   41236 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:14:30.416289   41236 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:14:30.431491   41236 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:14:30.432145   41236 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:14:30.432218   41236 cni.go:84] Creating CNI manager for ""
	I0722 00:14:30.432234   41236 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 00:14:30.432301   41236 start.go:340] cluster config:
	{Name:multinode-332426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
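Because three nodes were found, cni.go recommends kindnet above, so the restart should re-apply the kindnet DaemonSet rather than a single-node CNI. Once the cluster is back, an illustrative confirmation (the DaemonSet name assumes minikube's stock kindnet manifest):

    $ kubectl --context multinode-332426 -n kube-system get daemonset kindnet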
	I0722 00:14:30.432428   41236 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:14:30.434171   41236 out.go:177] * Starting "multinode-332426" primary control-plane node in "multinode-332426" cluster
	I0722 00:14:30.435227   41236 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:14:30.435278   41236 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:14:30.435291   41236 cache.go:56] Caching tarball of preloaded images
	I0722 00:14:30.435365   41236 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:14:30.435378   41236 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:14:30.435506   41236 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/config.json ...
	I0722 00:14:30.435867   41236 start.go:360] acquireMachinesLock for multinode-332426: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:14:30.435946   41236 start.go:364] duration metric: took 48.255µs to acquireMachinesLock for "multinode-332426"
	I0722 00:14:30.435965   41236 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:14:30.435976   41236 fix.go:54] fixHost starting: 
	I0722 00:14:30.436226   41236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:14:30.436264   41236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:14:30.450293   41236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0722 00:14:30.450700   41236 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:14:30.451101   41236 main.go:141] libmachine: Using API Version  1
	I0722 00:14:30.451124   41236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:14:30.451496   41236 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:14:30.451713   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:14:30.451851   41236 main.go:141] libmachine: (multinode-332426) Calling .GetState
	I0722 00:14:30.453619   41236 fix.go:112] recreateIfNeeded on multinode-332426: state=Running err=<nil>
	W0722 00:14:30.453641   41236 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:14:30.455290   41236 out.go:177] * Updating the running kvm2 "multinode-332426" VM ...
	I0722 00:14:30.456363   41236 machine.go:94] provisionDockerMachine start ...
	I0722 00:14:30.456381   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:14:30.456562   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.459373   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.459779   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.459801   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.459910   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:30.460059   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.460227   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.460372   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:30.460520   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:14:30.460713   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:14:30.460723   41236 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:14:30.563966   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-332426
	
	I0722 00:14:30.563991   41236 main.go:141] libmachine: (multinode-332426) Calling .GetMachineName
	I0722 00:14:30.564209   41236 buildroot.go:166] provisioning hostname "multinode-332426"
	I0722 00:14:30.564238   41236 main.go:141] libmachine: (multinode-332426) Calling .GetMachineName
	I0722 00:14:30.564467   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.567096   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.567513   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.567540   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.567700   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:30.567882   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.568106   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.568252   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:30.568422   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:14:30.568596   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:14:30.568613   41236 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-332426 && echo "multinode-332426" | sudo tee /etc/hostname
	I0722 00:14:30.686711   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-332426
	
	I0722 00:14:30.686736   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.689564   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.689974   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.690011   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.690126   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:30.690329   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.690526   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.690687   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:30.690865   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:14:30.691110   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:14:30.691132   41236 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-332426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-332426/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-332426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:14:30.791259   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:14:30.791291   41236 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:14:30.791313   41236 buildroot.go:174] setting up certificates
	I0722 00:14:30.791324   41236 provision.go:84] configureAuth start
	I0722 00:14:30.791360   41236 main.go:141] libmachine: (multinode-332426) Calling .GetMachineName
	I0722 00:14:30.791622   41236 main.go:141] libmachine: (multinode-332426) Calling .GetIP
	I0722 00:14:30.794191   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.794647   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.794676   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.794823   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.797116   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.797407   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.797439   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.797634   41236 provision.go:143] copyHostCerts
	I0722 00:14:30.797669   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:14:30.797701   41236 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:14:30.797721   41236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:14:30.797786   41236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:14:30.797861   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:14:30.797877   41236 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:14:30.797883   41236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:14:30.797907   41236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:14:30.797944   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:14:30.797959   41236 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:14:30.797965   41236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:14:30.797984   41236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:14:30.798023   41236 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.multinode-332426 san=[127.0.0.1 192.168.39.67 localhost minikube multinode-332426]
	I0722 00:14:30.873166   41236 provision.go:177] copyRemoteCerts
	I0722 00:14:30.873220   41236 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:14:30.873242   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.876170   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.876577   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.876600   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.876770   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:30.876932   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.877091   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:30.877255   41236 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:14:30.961184   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 00:14:30.961275   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:14:30.991405   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 00:14:30.991489   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0722 00:14:31.013536   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 00:14:31.013599   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:14:31.036527   41236 provision.go:87] duration metric: took 245.190005ms to configureAuth
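configureAuth above regenerated the machine's server certificate with the listed SANs (127.0.0.1, 192.168.39.67, localhost, minikube, multinode-332426) and copied it to /etc/docker on the guest. The SANs on the host-side copy can be double-checked with OpenSSL (illustrative; the -ext option requires OpenSSL 1.1.1 or newer):

    $ openssl x509 -in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -noout -ext subjectAltName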
	I0722 00:14:31.036550   41236 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:14:31.036786   41236 config.go:182] Loaded profile config "multinode-332426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:14:31.036866   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:31.039488   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:31.039834   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:31.039862   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:31.039959   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:31.040146   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:31.040305   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:31.040438   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:31.040564   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:14:31.040722   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:14:31.040734   41236 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:16:01.783677   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:16:01.783703   41236 machine.go:97] duration metric: took 1m31.327328851s to provisionDockerMachine
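Note where the 1m31s reported above was spent: the SSH command that writes /etc/sysconfig/crio.minikube was issued at 00:14:31 but only returned at 00:16:01, so nearly the whole provisioning time went into the systemctl restart crio step. When such a stall needs triage, the unit's journal inside the guest is the natural first stop (a diagnostic sketch; not part of the recorded run):

    $ minikube -p multinode-332426 ssh -- sudo journalctl -u crio -b --no-pager -n 50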
	I0722 00:16:01.783715   41236 start.go:293] postStartSetup for "multinode-332426" (driver="kvm2")
	I0722 00:16:01.783724   41236 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:16:01.783757   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:01.784043   41236 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:16:01.784139   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:16:01.787314   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:01.787744   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:01.787768   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:01.787966   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:16:01.788154   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:01.788315   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:16:01.788468   41236 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:16:01.869758   41236 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:16:01.873562   41236 command_runner.go:130] > NAME=Buildroot
	I0722 00:16:01.873584   41236 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0722 00:16:01.873590   41236 command_runner.go:130] > ID=buildroot
	I0722 00:16:01.873598   41236 command_runner.go:130] > VERSION_ID=2023.02.9
	I0722 00:16:01.873605   41236 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0722 00:16:01.873910   41236 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:16:01.873928   41236 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:16:01.873979   41236 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:16:01.874042   41236 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:16:01.874052   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0722 00:16:01.874135   41236 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:16:01.883013   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:16:01.904725   41236 start.go:296] duration metric: took 120.995763ms for postStartSetup
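
The filesync step above mirrors anything found under the profile's .minikube/files tree onto the guest, keeping the relative path (files/etc/ssl/certs/122632.pem becomes /etc/ssl/certs/122632.pem). A minimal standalone sketch of that mapping, using only the Go standard library rather than minikube's own filesync/vm_assets code; the root path is taken from the log:

    // filesync_sketch.go — sketch of the local-asset scan logged above.
    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    func main() {
    	// Local asset root, as seen in the filesync.go lines above.
    	root := "/home/jenkins/minikube-integration/19312-5094/.minikube/files"
    	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, _ := filepath.Rel(root, path) // e.g. etc/ssl/certs/122632.pem
    		dest := "/" + rel                  // guest destination keeps the relative path
    		fmt.Printf("local asset: %s -> %s\n", path, dest)
    		return nil
    	})
    }
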
	I0722 00:16:01.904768   41236 fix.go:56] duration metric: took 1m31.468793708s for fixHost
	I0722 00:16:01.904788   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:16:01.907462   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:01.907810   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:01.907832   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:01.908038   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:16:01.908232   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:01.908411   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:01.908554   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:16:01.908734   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:16:01.908911   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:16:01.908920   41236 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:16:02.006917   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721607361.981476151
	
	I0722 00:16:02.006943   41236 fix.go:216] guest clock: 1721607361.981476151
	I0722 00:16:02.006956   41236 fix.go:229] Guest: 2024-07-22 00:16:01.981476151 +0000 UTC Remote: 2024-07-22 00:16:01.904772468 +0000 UTC m=+91.589726844 (delta=76.703683ms)
	I0722 00:16:02.006989   41236 fix.go:200] guest clock delta is within tolerance: 76.703683ms
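
The fix step above reads the guest clock with `date +%s.%N`, parses the seconds.nanoseconds output, and compares it against the host-side timestamp before releasing the machines lock. A sketch of that comparison using the exact values from the log; the 2-second tolerance is an assumption for illustration, not necessarily minikube's constant:

    // clockdelta_sketch.go — sketch of the guest-clock tolerance check above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	out := "1721607361.981476151" // output of `date +%s.%N` on the guest
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2) // [seconds, nanoseconds]
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)

    	// Host-side reference timestamp, from the "Remote:" value in the log.
    	remote := time.Date(2024, 7, 22, 0, 16, 1, 904772468, time.UTC)
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	// Prints a delta of ~76.7ms, matching the logged value.
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
    }
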
	I0722 00:16:02.006996   41236 start.go:83] releasing machines lock for "multinode-332426", held for 1m31.57104089s
	I0722 00:16:02.007016   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:02.007321   41236 main.go:141] libmachine: (multinode-332426) Calling .GetIP
	I0722 00:16:02.009950   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.010401   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:02.010431   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.010568   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:02.011102   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:02.011272   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:02.011363   41236 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:16:02.011410   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:16:02.011514   41236 ssh_runner.go:195] Run: cat /version.json
	I0722 00:16:02.011543   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:16:02.013987   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.014319   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.014358   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:02.014381   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.014554   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:16:02.014718   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:02.014804   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:02.014829   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.014856   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:16:02.015001   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:16:02.015006   41236 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:16:02.015145   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:02.015304   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:16:02.015454   41236 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:16:02.087427   41236 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0722 00:16:02.087894   41236 ssh_runner.go:195] Run: systemctl --version
	I0722 00:16:02.121986   41236 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0722 00:16:02.122570   41236 command_runner.go:130] > systemd 252 (252)
	I0722 00:16:02.122640   41236 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
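
The `curl -sS -m 2 https://registry.k8s.io/` call above is a quick reachability probe with a two-second budget. A rough Go equivalent using an http.Client timeout; note that, unlike curl, Go's default client follows redirects, so the "Temporary Redirect" body seen above would be followed rather than printed:

    // regcheck_sketch.go — rough analogue of the registry reachability probe above.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second} // mirrors curl's -m 2
    	resp, err := client.Get("https://registry.k8s.io/")
    	if err != nil {
    		fmt.Println("registry unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
    	fmt.Printf("status %d: %s\n", resp.StatusCode, body)
    }
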
	I0722 00:16:02.122708   41236 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:16:02.280490   41236 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 00:16:02.285880   41236 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0722 00:16:02.286009   41236 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:16:02.286064   41236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:16:02.295352   41236 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
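
The find/mv pass above renames any bridge or podman CNI config in /etc/cni/net.d to <name>.mk_disabled so it cannot shadow the cluster's own CNI (here nothing matched, so nothing was disabled). A sketch of the same pass in Go, under the assumption that a plain substring match on the file name is close enough to the find expression:

    // cni_disable_sketch.go — sketch of the bridge-CNI disabling pass logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("no CNI config dir:", err)
    		return
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue // already disabled, or not a file
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			os.Rename(src, src+".mk_disabled") // same effect as `mv {} {}.mk_disabled`
    		}
    	}
    }
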
	I0722 00:16:02.295375   41236 start.go:495] detecting cgroup driver to use...
	I0722 00:16:02.295428   41236 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:16:02.312826   41236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:16:02.326996   41236 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:16:02.327060   41236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:16:02.340498   41236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:16:02.353907   41236 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:16:02.512764   41236 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:16:02.660935   41236 docker.go:233] disabling docker service ...
	I0722 00:16:02.661008   41236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:16:02.677837   41236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:16:02.691085   41236 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:16:02.822227   41236 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:16:02.955461   41236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:16:02.969111   41236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:16:02.986766   41236 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0722 00:16:02.987109   41236 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:16:02.987156   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:02.996668   41236 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:16:02.996729   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.006439   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.015691   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.024951   41236 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:16:03.034675   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.044467   41236 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.054931   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
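
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup driver to cgroupfs. A sketch of the two main substitutions expressed as Go regexps over an illustrative config fragment (the fragment's starting values are assumptions):

    // crioconf_sketch.go — sketch of the sed-style rewrites applied above.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Illustrative starting content for 02-crio.conf.
    	conf := `pause_image = "registry.k8s.io/pause:3.8"
    cgroup_manager = "systemd"`

    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

    	out := pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	out = cgroup.ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
    	fmt.Println(out)
    }

After the file is rewritten, the daemon-reload and `systemctl restart crio` seen below are what make the new pause image and cgroup driver take effect.
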
	I0722 00:16:03.064495   41236 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:16:03.073668   41236 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0722 00:16:03.073787   41236 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:16:03.083556   41236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:16:03.213843   41236 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:16:03.437195   41236 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:16:03.437257   41236 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:16:03.441617   41236 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0722 00:16:03.441640   41236 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0722 00:16:03.441646   41236 command_runner.go:130] > Device: 0,22	Inode: 1364        Links: 1
	I0722 00:16:03.441652   41236 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 00:16:03.441657   41236 command_runner.go:130] > Access: 2024-07-22 00:16:03.371742621 +0000
	I0722 00:16:03.441666   41236 command_runner.go:130] > Modify: 2024-07-22 00:16:03.318741047 +0000
	I0722 00:16:03.441673   41236 command_runner.go:130] > Change: 2024-07-22 00:16:03.318741047 +0000
	I0722 00:16:03.441679   41236 command_runner.go:130] >  Birth: -
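
The "Will wait 60s for socket path" step above amounts to polling until /var/run/crio/crio.sock exists after the crio restart; the stat output confirms it appeared almost immediately. A sketch of that wait loop; the 500ms poll interval is an assumption:

    // sockwait_sketch.go — sketch of the CRI socket wait logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket file exists
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("socket ready")
    }
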
	I0722 00:16:03.441694   41236 start.go:563] Will wait 60s for crictl version
	I0722 00:16:03.441750   41236 ssh_runner.go:195] Run: which crictl
	I0722 00:16:03.445208   41236 command_runner.go:130] > /usr/bin/crictl
	I0722 00:16:03.445270   41236 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:16:03.478857   41236 command_runner.go:130] > Version:  0.1.0
	I0722 00:16:03.478880   41236 command_runner.go:130] > RuntimeName:  cri-o
	I0722 00:16:03.478885   41236 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0722 00:16:03.478900   41236 command_runner.go:130] > RuntimeApiVersion:  v1
	I0722 00:16:03.480043   41236 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:16:03.480105   41236 ssh_runner.go:195] Run: crio --version
	I0722 00:16:03.507057   41236 command_runner.go:130] > crio version 1.29.1
	I0722 00:16:03.507077   41236 command_runner.go:130] > Version:        1.29.1
	I0722 00:16:03.507085   41236 command_runner.go:130] > GitCommit:      unknown
	I0722 00:16:03.507090   41236 command_runner.go:130] > GitCommitDate:  unknown
	I0722 00:16:03.507095   41236 command_runner.go:130] > GitTreeState:   clean
	I0722 00:16:03.507103   41236 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0722 00:16:03.507118   41236 command_runner.go:130] > GoVersion:      go1.21.6
	I0722 00:16:03.507124   41236 command_runner.go:130] > Compiler:       gc
	I0722 00:16:03.507130   41236 command_runner.go:130] > Platform:       linux/amd64
	I0722 00:16:03.507135   41236 command_runner.go:130] > Linkmode:       dynamic
	I0722 00:16:03.507142   41236 command_runner.go:130] > BuildTags:      
	I0722 00:16:03.507149   41236 command_runner.go:130] >   containers_image_ostree_stub
	I0722 00:16:03.507157   41236 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0722 00:16:03.507166   41236 command_runner.go:130] >   btrfs_noversion
	I0722 00:16:03.507174   41236 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0722 00:16:03.507182   41236 command_runner.go:130] >   libdm_no_deferred_remove
	I0722 00:16:03.507191   41236 command_runner.go:130] >   seccomp
	I0722 00:16:03.507199   41236 command_runner.go:130] > LDFlags:          unknown
	I0722 00:16:03.507206   41236 command_runner.go:130] > SeccompEnabled:   true
	I0722 00:16:03.507214   41236 command_runner.go:130] > AppArmorEnabled:  false
	I0722 00:16:03.507287   41236 ssh_runner.go:195] Run: crio --version
	I0722 00:16:03.532647   41236 command_runner.go:130] > crio version 1.29.1
	I0722 00:16:03.532669   41236 command_runner.go:130] > Version:        1.29.1
	I0722 00:16:03.532675   41236 command_runner.go:130] > GitCommit:      unknown
	I0722 00:16:03.532679   41236 command_runner.go:130] > GitCommitDate:  unknown
	I0722 00:16:03.532683   41236 command_runner.go:130] > GitTreeState:   clean
	I0722 00:16:03.532688   41236 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0722 00:16:03.532692   41236 command_runner.go:130] > GoVersion:      go1.21.6
	I0722 00:16:03.532696   41236 command_runner.go:130] > Compiler:       gc
	I0722 00:16:03.532701   41236 command_runner.go:130] > Platform:       linux/amd64
	I0722 00:16:03.532705   41236 command_runner.go:130] > Linkmode:       dynamic
	I0722 00:16:03.532709   41236 command_runner.go:130] > BuildTags:      
	I0722 00:16:03.532712   41236 command_runner.go:130] >   containers_image_ostree_stub
	I0722 00:16:03.532717   41236 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0722 00:16:03.532721   41236 command_runner.go:130] >   btrfs_noversion
	I0722 00:16:03.532726   41236 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0722 00:16:03.532730   41236 command_runner.go:130] >   libdm_no_deferred_remove
	I0722 00:16:03.532736   41236 command_runner.go:130] >   seccomp
	I0722 00:16:03.532740   41236 command_runner.go:130] > LDFlags:          unknown
	I0722 00:16:03.532745   41236 command_runner.go:130] > SeccompEnabled:   true
	I0722 00:16:03.532748   41236 command_runner.go:130] > AppArmorEnabled:  false
	I0722 00:16:03.535750   41236 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:16:03.536992   41236 main.go:141] libmachine: (multinode-332426) Calling .GetIP
	I0722 00:16:03.539700   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:03.540069   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:03.540096   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:03.540282   41236 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:16:03.544116   41236 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0722 00:16:03.544316   41236 kubeadm.go:883] updating cluster {Name:multinode-332426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:16:03.544456   41236 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:16:03.544500   41236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:16:03.584309   41236 command_runner.go:130] > {
	I0722 00:16:03.584334   41236 command_runner.go:130] >   "images": [
	I0722 00:16:03.584338   41236 command_runner.go:130] >     {
	I0722 00:16:03.584346   41236 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0722 00:16:03.584351   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584356   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0722 00:16:03.584360   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584364   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584378   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0722 00:16:03.584390   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0722 00:16:03.584397   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584405   41236 command_runner.go:130] >       "size": "87165492",
	I0722 00:16:03.584413   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584420   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.584434   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584440   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584444   41236 command_runner.go:130] >     },
	I0722 00:16:03.584449   41236 command_runner.go:130] >     {
	I0722 00:16:03.584454   41236 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0722 00:16:03.584459   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584465   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0722 00:16:03.584472   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584478   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584491   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0722 00:16:03.584502   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0722 00:16:03.584511   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584524   41236 command_runner.go:130] >       "size": "87174707",
	I0722 00:16:03.584537   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584553   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.584562   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584571   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584590   41236 command_runner.go:130] >     },
	I0722 00:16:03.584596   41236 command_runner.go:130] >     {
	I0722 00:16:03.584606   41236 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0722 00:16:03.584616   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584624   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0722 00:16:03.584630   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584635   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584649   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0722 00:16:03.584665   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0722 00:16:03.584674   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584681   41236 command_runner.go:130] >       "size": "1363676",
	I0722 00:16:03.584691   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584697   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.584706   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584712   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584718   41236 command_runner.go:130] >     },
	I0722 00:16:03.584721   41236 command_runner.go:130] >     {
	I0722 00:16:03.584731   41236 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0722 00:16:03.584740   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584750   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0722 00:16:03.584759   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584768   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584782   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0722 00:16:03.584802   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0722 00:16:03.584810   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584818   41236 command_runner.go:130] >       "size": "31470524",
	I0722 00:16:03.584828   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584834   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.584842   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584849   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584857   41236 command_runner.go:130] >     },
	I0722 00:16:03.584868   41236 command_runner.go:130] >     {
	I0722 00:16:03.584880   41236 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0722 00:16:03.584886   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584893   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0722 00:16:03.584898   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584905   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584921   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0722 00:16:03.584936   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0722 00:16:03.584944   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584951   41236 command_runner.go:130] >       "size": "61245718",
	I0722 00:16:03.584960   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584968   41236 command_runner.go:130] >       "username": "nonroot",
	I0722 00:16:03.584972   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584979   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584984   41236 command_runner.go:130] >     },
	I0722 00:16:03.584989   41236 command_runner.go:130] >     {
	I0722 00:16:03.585000   41236 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0722 00:16:03.585009   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585017   41236 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0722 00:16:03.585026   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585032   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585046   41236 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0722 00:16:03.585056   41236 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0722 00:16:03.585062   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585069   41236 command_runner.go:130] >       "size": "150779692",
	I0722 00:16:03.585078   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585085   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.585091   41236 command_runner.go:130] >       },
	I0722 00:16:03.585098   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585107   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585113   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585121   41236 command_runner.go:130] >     },
	I0722 00:16:03.585126   41236 command_runner.go:130] >     {
	I0722 00:16:03.585137   41236 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0722 00:16:03.585144   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585150   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0722 00:16:03.585166   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585176   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585187   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0722 00:16:03.585202   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0722 00:16:03.585210   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585216   41236 command_runner.go:130] >       "size": "117609954",
	I0722 00:16:03.585224   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585228   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.585241   41236 command_runner.go:130] >       },
	I0722 00:16:03.585250   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585260   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585268   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585277   41236 command_runner.go:130] >     },
	I0722 00:16:03.585282   41236 command_runner.go:130] >     {
	I0722 00:16:03.585292   41236 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0722 00:16:03.585301   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585309   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0722 00:16:03.585315   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585320   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585353   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0722 00:16:03.585370   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0722 00:16:03.585376   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585382   41236 command_runner.go:130] >       "size": "112198984",
	I0722 00:16:03.585390   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585397   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.585404   41236 command_runner.go:130] >       },
	I0722 00:16:03.585409   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585416   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585423   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585434   41236 command_runner.go:130] >     },
	I0722 00:16:03.585440   41236 command_runner.go:130] >     {
	I0722 00:16:03.585450   41236 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0722 00:16:03.585456   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585463   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0722 00:16:03.585468   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585473   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585487   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0722 00:16:03.585501   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0722 00:16:03.585508   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585515   41236 command_runner.go:130] >       "size": "85953945",
	I0722 00:16:03.585525   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.585531   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585540   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585550   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585555   41236 command_runner.go:130] >     },
	I0722 00:16:03.585561   41236 command_runner.go:130] >     {
	I0722 00:16:03.585572   41236 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0722 00:16:03.585586   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585593   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0722 00:16:03.585602   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585608   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585624   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0722 00:16:03.585637   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0722 00:16:03.585645   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585652   41236 command_runner.go:130] >       "size": "63051080",
	I0722 00:16:03.585659   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585666   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.585674   41236 command_runner.go:130] >       },
	I0722 00:16:03.585680   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585689   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585695   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585704   41236 command_runner.go:130] >     },
	I0722 00:16:03.585709   41236 command_runner.go:130] >     {
	I0722 00:16:03.585721   41236 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0722 00:16:03.585731   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585739   41236 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0722 00:16:03.585747   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585754   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585767   41236 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0722 00:16:03.585780   41236 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0722 00:16:03.585788   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585794   41236 command_runner.go:130] >       "size": "750414",
	I0722 00:16:03.585807   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585817   41236 command_runner.go:130] >         "value": "65535"
	I0722 00:16:03.585822   41236 command_runner.go:130] >       },
	I0722 00:16:03.585831   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585838   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585847   41236 command_runner.go:130] >       "pinned": true
	I0722 00:16:03.585852   41236 command_runner.go:130] >     }
	I0722 00:16:03.585859   41236 command_runner.go:130] >   ]
	I0722 00:16:03.585862   41236 command_runner.go:130] > }
	I0722 00:16:03.586072   41236 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:16:03.586084   41236 crio.go:433] Images already preloaded, skipping extraction
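
The "all images are preloaded" decision above comes from unmarshalling the `crictl images --output json` payload and checking that every required tag is already present in the runtime. A sketch against the schema printed above, using the pinned pause image as the tag to check; the struct is an illustration matching that JSON, not minikube's own type:

    // imagecheck_sketch.go — sketch of the preload check logged above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Field names mirror the JSON emitted by `crictl images --output json` above.
    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    		Pinned   bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	// Abbreviated payload in the shape shown above.
    	payload := []byte(`{"images":[{"id":"e6f18168...","repoTags":["registry.k8s.io/pause:3.9"],"pinned":true}]}`)
    	var list imageList
    	if err := json.Unmarshal(payload, &list); err != nil {
    		panic(err)
    	}
    	want := "registry.k8s.io/pause:3.9"
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("preloaded:", want)
    				return
    			}
    		}
    	}
    	fmt.Println("missing:", want)
    }
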
	I0722 00:16:03.586138   41236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:16:03.617866   41236 command_runner.go:130] > {
	I0722 00:16:03.617890   41236 command_runner.go:130] >   "images": [
	I0722 00:16:03.617895   41236 command_runner.go:130] >     {
	I0722 00:16:03.617903   41236 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0722 00:16:03.617908   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.617923   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0722 00:16:03.617929   41236 command_runner.go:130] >       ],
	I0722 00:16:03.617936   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.617952   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0722 00:16:03.617965   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0722 00:16:03.617970   41236 command_runner.go:130] >       ],
	I0722 00:16:03.617974   41236 command_runner.go:130] >       "size": "87165492",
	I0722 00:16:03.617979   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.617988   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.617999   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618003   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618007   41236 command_runner.go:130] >     },
	I0722 00:16:03.618010   41236 command_runner.go:130] >     {
	I0722 00:16:03.618020   41236 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0722 00:16:03.618029   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618037   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0722 00:16:03.618046   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618053   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618065   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0722 00:16:03.618075   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0722 00:16:03.618082   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618087   41236 command_runner.go:130] >       "size": "87174707",
	I0722 00:16:03.618092   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.618101   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618110   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618119   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618132   41236 command_runner.go:130] >     },
	I0722 00:16:03.618139   41236 command_runner.go:130] >     {
	I0722 00:16:03.618150   41236 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0722 00:16:03.618159   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618169   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0722 00:16:03.618175   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618179   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618189   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0722 00:16:03.618204   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0722 00:16:03.618213   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618220   41236 command_runner.go:130] >       "size": "1363676",
	I0722 00:16:03.618230   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.618240   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618252   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618262   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618270   41236 command_runner.go:130] >     },
	I0722 00:16:03.618278   41236 command_runner.go:130] >     {
	I0722 00:16:03.618285   41236 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0722 00:16:03.618300   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618311   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0722 00:16:03.618320   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618329   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618344   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0722 00:16:03.618366   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0722 00:16:03.618374   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618377   41236 command_runner.go:130] >       "size": "31470524",
	I0722 00:16:03.618386   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.618396   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618406   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618414   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618422   41236 command_runner.go:130] >     },
	I0722 00:16:03.618428   41236 command_runner.go:130] >     {
	I0722 00:16:03.618441   41236 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0722 00:16:03.618456   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618464   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0722 00:16:03.618471   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618481   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618495   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0722 00:16:03.618510   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0722 00:16:03.618526   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618536   41236 command_runner.go:130] >       "size": "61245718",
	I0722 00:16:03.618543   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.618547   41236 command_runner.go:130] >       "username": "nonroot",
	I0722 00:16:03.618555   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618565   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618585   41236 command_runner.go:130] >     },
	I0722 00:16:03.618594   41236 command_runner.go:130] >     {
	I0722 00:16:03.618614   41236 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0722 00:16:03.618623   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618631   41236 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0722 00:16:03.618640   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618649   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618663   41236 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0722 00:16:03.618676   41236 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0722 00:16:03.618690   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618698   41236 command_runner.go:130] >       "size": "150779692",
	I0722 00:16:03.618701   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.618708   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.618719   41236 command_runner.go:130] >       },
	I0722 00:16:03.618729   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618735   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618745   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618753   41236 command_runner.go:130] >     },
	I0722 00:16:03.618759   41236 command_runner.go:130] >     {
	I0722 00:16:03.618771   41236 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0722 00:16:03.618780   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618791   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0722 00:16:03.618799   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618803   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618815   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0722 00:16:03.618830   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0722 00:16:03.618839   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618849   41236 command_runner.go:130] >       "size": "117609954",
	I0722 00:16:03.618857   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.618867   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.618875   41236 command_runner.go:130] >       },
	I0722 00:16:03.618884   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618892   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618899   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618902   41236 command_runner.go:130] >     },
	I0722 00:16:03.618910   41236 command_runner.go:130] >     {
	I0722 00:16:03.618923   41236 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0722 00:16:03.618933   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618945   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0722 00:16:03.618954   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618963   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618992   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0722 00:16:03.619005   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0722 00:16:03.619010   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619020   41236 command_runner.go:130] >       "size": "112198984",
	I0722 00:16:03.619038   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.619048   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.619056   41236 command_runner.go:130] >       },
	I0722 00:16:03.619066   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.619075   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.619084   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.619090   41236 command_runner.go:130] >     },
	I0722 00:16:03.619094   41236 command_runner.go:130] >     {
	I0722 00:16:03.619105   41236 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0722 00:16:03.619115   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.619125   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0722 00:16:03.619134   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619143   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.619157   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0722 00:16:03.619173   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0722 00:16:03.619179   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619184   41236 command_runner.go:130] >       "size": "85953945",
	I0722 00:16:03.619193   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.619201   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.619207   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.619216   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.619225   41236 command_runner.go:130] >     },
	I0722 00:16:03.619233   41236 command_runner.go:130] >     {
	I0722 00:16:03.619246   41236 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0722 00:16:03.619256   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.619267   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0722 00:16:03.619273   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619277   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.619291   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0722 00:16:03.619306   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0722 00:16:03.619315   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619324   41236 command_runner.go:130] >       "size": "63051080",
	I0722 00:16:03.619332   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.619341   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.619349   41236 command_runner.go:130] >       },
	I0722 00:16:03.619358   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.619372   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.619379   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.619383   41236 command_runner.go:130] >     },
	I0722 00:16:03.619391   41236 command_runner.go:130] >     {
	I0722 00:16:03.619402   41236 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0722 00:16:03.619412   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.619419   41236 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0722 00:16:03.619427   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619433   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.619445   41236 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0722 00:16:03.619457   41236 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0722 00:16:03.619464   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619470   41236 command_runner.go:130] >       "size": "750414",
	I0722 00:16:03.619478   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.619483   41236 command_runner.go:130] >         "value": "65535"
	I0722 00:16:03.619490   41236 command_runner.go:130] >       },
	I0722 00:16:03.619496   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.619505   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.619510   41236 command_runner.go:130] >       "pinned": true
	I0722 00:16:03.619518   41236 command_runner.go:130] >     }
	I0722 00:16:03.619522   41236 command_runner.go:130] >   ]
	I0722 00:16:03.619529   41236 command_runner.go:130] > }
	I0722 00:16:03.619685   41236 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:16:03.619697   41236 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:16:03.619704   41236 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.30.3 crio true true} ...
	I0722 00:16:03.619801   41236 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-332426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
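
The kubelet drop-in above is generated from the node's name, IP, and Kubernetes version. A sketch of rendering that unit text with text/template; the template body mirrors the logged output, and the field names are illustrative rather than minikube's own:

    // kubeletunit_sketch.go — sketch of rendering the kubelet drop-in shown above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	t.Execute(os.Stdout, map[string]string{
    		"Version": "v1.30.3",
    		"Name":    "multinode-332426",
    		"IP":      "192.168.39.67",
    	})
    }
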
	I0722 00:16:03.619867   41236 ssh_runner.go:195] Run: crio config
	I0722 00:16:03.654561   41236 command_runner.go:130] ! time="2024-07-22 00:16:03.629007454Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0722 00:16:03.660435   41236 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0722 00:16:03.666731   41236 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0722 00:16:03.666759   41236 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0722 00:16:03.666765   41236 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0722 00:16:03.666769   41236 command_runner.go:130] > #
	I0722 00:16:03.666775   41236 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0722 00:16:03.666781   41236 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0722 00:16:03.666788   41236 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0722 00:16:03.666794   41236 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0722 00:16:03.666802   41236 command_runner.go:130] > # reload'.
	I0722 00:16:03.666811   41236 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0722 00:16:03.666820   41236 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0722 00:16:03.666831   41236 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0722 00:16:03.666839   41236 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0722 00:16:03.666844   41236 command_runner.go:130] > [crio]
	I0722 00:16:03.666853   41236 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0722 00:16:03.666864   41236 command_runner.go:130] > # container images, in this directory.
	I0722 00:16:03.666871   41236 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0722 00:16:03.666885   41236 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0722 00:16:03.666893   41236 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0722 00:16:03.666903   41236 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from the root directory.
	I0722 00:16:03.666912   41236 command_runner.go:130] > # imagestore = ""
	I0722 00:16:03.666922   41236 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0722 00:16:03.666932   41236 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0722 00:16:03.666941   41236 command_runner.go:130] > storage_driver = "overlay"
	I0722 00:16:03.666949   41236 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0722 00:16:03.666961   41236 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0722 00:16:03.666969   41236 command_runner.go:130] > storage_option = [
	I0722 00:16:03.666974   41236 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0722 00:16:03.666977   41236 command_runner.go:130] > ]
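The storage_driver and storage_option values above come from minikube's provisioning. To override them for CRI-O only, a drop-in under /etc/crio/crio.conf.d is the usual route; a sketch (file name and option values are illustrative):

	# Hypothetical drop-in overriding the overlay mount options for CRI-O only
	sudo mkdir -p /etc/crio/crio.conf.d
	sudo tee /etc/crio/crio.conf.d/10-storage.conf >/dev/null <<-'EOF'
	[crio]
	storage_driver = "overlay"
	storage_option = ["overlay.mountopt=nodev"]
	EOF
	sudo systemctl restart crio   # storage settings are not covered by the SIGHUP live reload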
	I0722 00:16:03.666983   41236 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0722 00:16:03.666990   41236 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0722 00:16:03.666994   41236 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0722 00:16:03.667014   41236 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0722 00:16:03.667022   41236 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0722 00:16:03.667026   41236 command_runner.go:130] > # always happen on a node reboot
	I0722 00:16:03.667031   41236 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0722 00:16:03.667045   41236 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0722 00:16:03.667053   41236 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0722 00:16:03.667057   41236 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0722 00:16:03.667065   41236 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0722 00:16:03.667072   41236 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0722 00:16:03.667081   41236 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0722 00:16:03.667087   41236 command_runner.go:130] > # internal_wipe = true
	I0722 00:16:03.667094   41236 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0722 00:16:03.667101   41236 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0722 00:16:03.667105   41236 command_runner.go:130] > # internal_repair = false
	I0722 00:16:03.667113   41236 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0722 00:16:03.667119   41236 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0722 00:16:03.667126   41236 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0722 00:16:03.667131   41236 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0722 00:16:03.667138   41236 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0722 00:16:03.667142   41236 command_runner.go:130] > [crio.api]
	I0722 00:16:03.667149   41236 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0722 00:16:03.667154   41236 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0722 00:16:03.667161   41236 command_runner.go:130] > # IP address on which the stream server will listen.
	I0722 00:16:03.667165   41236 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0722 00:16:03.667172   41236 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0722 00:16:03.667178   41236 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0722 00:16:03.667182   41236 command_runner.go:130] > # stream_port = "0"
	I0722 00:16:03.667189   41236 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0722 00:16:03.667193   41236 command_runner.go:130] > # stream_enable_tls = false
	I0722 00:16:03.667201   41236 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0722 00:16:03.667205   41236 command_runner.go:130] > # stream_idle_timeout = ""
	I0722 00:16:03.667215   41236 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0722 00:16:03.667223   41236 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0722 00:16:03.667227   41236 command_runner.go:130] > # minutes.
	I0722 00:16:03.667233   41236 command_runner.go:130] > # stream_tls_cert = ""
	I0722 00:16:03.667238   41236 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0722 00:16:03.667258   41236 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0722 00:16:03.667265   41236 command_runner.go:130] > # stream_tls_key = ""
	I0722 00:16:03.667270   41236 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0722 00:16:03.667278   41236 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0722 00:16:03.667297   41236 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0722 00:16:03.667304   41236 command_runner.go:130] > # stream_tls_ca = ""
	I0722 00:16:03.667311   41236 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0722 00:16:03.667315   41236 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0722 00:16:03.667322   41236 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0722 00:16:03.667333   41236 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0722 00:16:03.667338   41236 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0722 00:16:03.667346   41236 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0722 00:16:03.667349   41236 command_runner.go:130] > [crio.runtime]
	I0722 00:16:03.667356   41236 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0722 00:16:03.667362   41236 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0722 00:16:03.667368   41236 command_runner.go:130] > # "nofile=1024:2048"
	I0722 00:16:03.667373   41236 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0722 00:16:03.667379   41236 command_runner.go:130] > # default_ulimits = [
	I0722 00:16:03.667382   41236 command_runner.go:130] > # ]
	I0722 00:16:03.667388   41236 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0722 00:16:03.667394   41236 command_runner.go:130] > # no_pivot = false
	I0722 00:16:03.667401   41236 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0722 00:16:03.667410   41236 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0722 00:16:03.667414   41236 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0722 00:16:03.667422   41236 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0722 00:16:03.667426   41236 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0722 00:16:03.667436   41236 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0722 00:16:03.667442   41236 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0722 00:16:03.667446   41236 command_runner.go:130] > # Cgroup setting for conmon
	I0722 00:16:03.667454   41236 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0722 00:16:03.667458   41236 command_runner.go:130] > conmon_cgroup = "pod"
	I0722 00:16:03.667464   41236 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0722 00:16:03.667471   41236 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0722 00:16:03.667482   41236 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0722 00:16:03.667492   41236 command_runner.go:130] > conmon_env = [
	I0722 00:16:03.667500   41236 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0722 00:16:03.667514   41236 command_runner.go:130] > ]
	I0722 00:16:03.667526   41236 command_runner.go:130] > # Additional environment variables to set for all the
	I0722 00:16:03.667538   41236 command_runner.go:130] > # containers. These are overridden if set in the
	I0722 00:16:03.667550   41236 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0722 00:16:03.667559   41236 command_runner.go:130] > # default_env = [
	I0722 00:16:03.667566   41236 command_runner.go:130] > # ]
	I0722 00:16:03.667572   41236 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0722 00:16:03.667583   41236 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0722 00:16:03.667589   41236 command_runner.go:130] > # selinux = false
	I0722 00:16:03.667595   41236 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0722 00:16:03.667602   41236 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0722 00:16:03.667608   41236 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0722 00:16:03.667614   41236 command_runner.go:130] > # seccomp_profile = ""
	I0722 00:16:03.667619   41236 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0722 00:16:03.667626   41236 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0722 00:16:03.667632   41236 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0722 00:16:03.667639   41236 command_runner.go:130] > # which might increase security.
	I0722 00:16:03.667643   41236 command_runner.go:130] > # This option is currently deprecated,
	I0722 00:16:03.667651   41236 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0722 00:16:03.667655   41236 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0722 00:16:03.667664   41236 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0722 00:16:03.667670   41236 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0722 00:16:03.667676   41236 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0722 00:16:03.667683   41236 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0722 00:16:03.667693   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.667700   41236 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0722 00:16:03.667705   41236 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0722 00:16:03.667711   41236 command_runner.go:130] > # the cgroup blockio controller.
	I0722 00:16:03.667715   41236 command_runner.go:130] > # blockio_config_file = ""
	I0722 00:16:03.667723   41236 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0722 00:16:03.667727   41236 command_runner.go:130] > # blockio parameters.
	I0722 00:16:03.667733   41236 command_runner.go:130] > # blockio_reload = false
	I0722 00:16:03.667739   41236 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0722 00:16:03.667745   41236 command_runner.go:130] > # irqbalance daemon.
	I0722 00:16:03.667749   41236 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0722 00:16:03.667757   41236 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0722 00:16:03.667771   41236 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0722 00:16:03.667779   41236 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0722 00:16:03.667785   41236 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0722 00:16:03.667793   41236 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0722 00:16:03.667797   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.667801   41236 command_runner.go:130] > # rdt_config_file = ""
	I0722 00:16:03.667806   41236 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0722 00:16:03.667811   41236 command_runner.go:130] > cgroup_manager = "cgroupfs"
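The cgroup_manager above must agree with the kubelet's cgroupDriver, or pod sandboxes fail to start. A quick consistency check (a sketch; the kubelet config path assumes the kubeadm-style layout minikube provisions):

	# Compare CRI-O's cgroup manager with the kubelet's cgroup driver
	sudo crio config 2>/dev/null | grep '^cgroup_manager'
	grep -i cgroupdriver /var/lib/kubelet/config.yaml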
	I0722 00:16:03.667839   41236 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0722 00:16:03.667846   41236 command_runner.go:130] > # separate_pull_cgroup = ""
	I0722 00:16:03.667851   41236 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0722 00:16:03.667857   41236 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0722 00:16:03.667860   41236 command_runner.go:130] > # will be added.
	I0722 00:16:03.667865   41236 command_runner.go:130] > # default_capabilities = [
	I0722 00:16:03.667868   41236 command_runner.go:130] > # 	"CHOWN",
	I0722 00:16:03.667872   41236 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0722 00:16:03.667875   41236 command_runner.go:130] > # 	"FSETID",
	I0722 00:16:03.667879   41236 command_runner.go:130] > # 	"FOWNER",
	I0722 00:16:03.667882   41236 command_runner.go:130] > # 	"SETGID",
	I0722 00:16:03.667887   41236 command_runner.go:130] > # 	"SETUID",
	I0722 00:16:03.667891   41236 command_runner.go:130] > # 	"SETPCAP",
	I0722 00:16:03.667898   41236 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0722 00:16:03.667901   41236 command_runner.go:130] > # 	"KILL",
	I0722 00:16:03.667906   41236 command_runner.go:130] > # ]
	I0722 00:16:03.667913   41236 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0722 00:16:03.667920   41236 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0722 00:16:03.667925   41236 command_runner.go:130] > # add_inheritable_capabilities = false
	I0722 00:16:03.667931   41236 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0722 00:16:03.667937   41236 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0722 00:16:03.667942   41236 command_runner.go:130] > default_sysctls = [
	I0722 00:16:03.667946   41236 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0722 00:16:03.667949   41236 command_runner.go:130] > ]
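The single entry above is what lets unprivileged pods bind ports below 1024. Additional namespaced-safe sysctls can be appended the same way; a sketch drop-in (the second sysctl is only an example):

	# Hypothetical drop-in adding a second default sysctl
	sudo tee /etc/crio/crio.conf.d/20-sysctls.conf >/dev/null <<-'EOF'
	[crio.runtime]
	default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0", "net.ipv4.ping_group_range=0 2147483647"]
	EOF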
	I0722 00:16:03.667953   41236 command_runner.go:130] > # List of devices on the host that a
	I0722 00:16:03.667962   41236 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0722 00:16:03.667965   41236 command_runner.go:130] > # allowed_devices = [
	I0722 00:16:03.667968   41236 command_runner.go:130] > # 	"/dev/fuse",
	I0722 00:16:03.667976   41236 command_runner.go:130] > # ]
	I0722 00:16:03.667983   41236 command_runner.go:130] > # List of additional devices, specified as
	I0722 00:16:03.667990   41236 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0722 00:16:03.667997   41236 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0722 00:16:03.668004   41236 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0722 00:16:03.668010   41236 command_runner.go:130] > # additional_devices = [
	I0722 00:16:03.668013   41236 command_runner.go:130] > # ]
	I0722 00:16:03.668018   41236 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0722 00:16:03.668024   41236 command_runner.go:130] > # cdi_spec_dirs = [
	I0722 00:16:03.668027   41236 command_runner.go:130] > # 	"/etc/cdi",
	I0722 00:16:03.668031   41236 command_runner.go:130] > # 	"/var/run/cdi",
	I0722 00:16:03.668034   41236 command_runner.go:130] > # ]
	I0722 00:16:03.668039   41236 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0722 00:16:03.668047   41236 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0722 00:16:03.668051   41236 command_runner.go:130] > # Defaults to false.
	I0722 00:16:03.668058   41236 command_runner.go:130] > # device_ownership_from_security_context = false
	I0722 00:16:03.668069   41236 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0722 00:16:03.668077   41236 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0722 00:16:03.668080   41236 command_runner.go:130] > # hooks_dir = [
	I0722 00:16:03.668085   41236 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0722 00:16:03.668091   41236 command_runner.go:130] > # ]
	I0722 00:16:03.668096   41236 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0722 00:16:03.668102   41236 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0722 00:16:03.668108   41236 command_runner.go:130] > # its default mounts from the following two files:
	I0722 00:16:03.668111   41236 command_runner.go:130] > #
	I0722 00:16:03.668118   41236 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0722 00:16:03.668126   41236 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0722 00:16:03.668131   41236 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0722 00:16:03.668136   41236 command_runner.go:130] > #
	I0722 00:16:03.668141   41236 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0722 00:16:03.668150   41236 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0722 00:16:03.668156   41236 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0722 00:16:03.668163   41236 command_runner.go:130] > #      only add mounts it finds in this file.
	I0722 00:16:03.668166   41236 command_runner.go:130] > #
	I0722 00:16:03.668170   41236 command_runner.go:130] > # default_mounts_file = ""
	I0722 00:16:03.668175   41236 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0722 00:16:03.668185   41236 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0722 00:16:03.668191   41236 command_runner.go:130] > pids_limit = 1024
	I0722 00:16:03.668197   41236 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0722 00:16:03.668205   41236 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0722 00:16:03.668211   41236 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0722 00:16:03.668220   41236 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0722 00:16:03.668226   41236 command_runner.go:130] > # log_size_max = -1
	I0722 00:16:03.668233   41236 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0722 00:16:03.668262   41236 command_runner.go:130] > # log_to_journald = false
	I0722 00:16:03.668275   41236 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0722 00:16:03.668280   41236 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0722 00:16:03.668286   41236 command_runner.go:130] > # Path to directory for container attach sockets.
	I0722 00:16:03.668291   41236 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0722 00:16:03.668298   41236 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0722 00:16:03.668302   41236 command_runner.go:130] > # bind_mount_prefix = ""
	I0722 00:16:03.668309   41236 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0722 00:16:03.668314   41236 command_runner.go:130] > # read_only = false
	I0722 00:16:03.668320   41236 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0722 00:16:03.668334   41236 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0722 00:16:03.668340   41236 command_runner.go:130] > # live configuration reload.
	I0722 00:16:03.668344   41236 command_runner.go:130] > # log_level = "info"
	I0722 00:16:03.668350   41236 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0722 00:16:03.668355   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.668361   41236 command_runner.go:130] > # log_filter = ""
	I0722 00:16:03.668367   41236 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0722 00:16:03.668376   41236 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0722 00:16:03.668379   41236 command_runner.go:130] > # separated by comma.
	I0722 00:16:03.668387   41236 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 00:16:03.668392   41236 command_runner.go:130] > # uid_mappings = ""
	I0722 00:16:03.668398   41236 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0722 00:16:03.668405   41236 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0722 00:16:03.668409   41236 command_runner.go:130] > # separated by comma.
	I0722 00:16:03.668418   41236 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 00:16:03.668424   41236 command_runner.go:130] > # gid_mappings = ""
	I0722 00:16:03.668430   41236 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0722 00:16:03.668438   41236 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0722 00:16:03.668448   41236 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0722 00:16:03.668457   41236 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 00:16:03.668462   41236 command_runner.go:130] > # minimum_mappable_uid = -1
	I0722 00:16:03.668467   41236 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0722 00:16:03.668473   41236 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0722 00:16:03.668481   41236 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0722 00:16:03.668494   41236 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 00:16:03.668506   41236 command_runner.go:130] > # minimum_mappable_gid = -1
	I0722 00:16:03.668518   41236 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0722 00:16:03.668529   41236 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0722 00:16:03.668540   41236 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0722 00:16:03.668548   41236 command_runner.go:130] > # ctr_stop_timeout = 30
	I0722 00:16:03.668560   41236 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0722 00:16:03.668569   41236 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0722 00:16:03.668576   41236 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0722 00:16:03.668580   41236 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0722 00:16:03.668584   41236 command_runner.go:130] > drop_infra_ctr = false
	I0722 00:16:03.668590   41236 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0722 00:16:03.668595   41236 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0722 00:16:03.668601   41236 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0722 00:16:03.668605   41236 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0722 00:16:03.668611   41236 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0722 00:16:03.668616   41236 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0722 00:16:03.668621   41236 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0722 00:16:03.668625   41236 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0722 00:16:03.668629   41236 command_runner.go:130] > # shared_cpuset = ""
	I0722 00:16:03.668634   41236 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0722 00:16:03.668638   41236 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0722 00:16:03.668642   41236 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0722 00:16:03.668649   41236 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0722 00:16:03.668653   41236 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0722 00:16:03.668658   41236 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0722 00:16:03.668665   41236 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0722 00:16:03.668669   41236 command_runner.go:130] > # enable_criu_support = false
	I0722 00:16:03.668674   41236 command_runner.go:130] > # Enable/disable the generation of the container,
	I0722 00:16:03.668679   41236 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0722 00:16:03.668692   41236 command_runner.go:130] > # enable_pod_events = false
	I0722 00:16:03.668701   41236 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0722 00:16:03.668717   41236 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0722 00:16:03.668721   41236 command_runner.go:130] > # default_runtime = "runc"
	I0722 00:16:03.668727   41236 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0722 00:16:03.668734   41236 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0722 00:16:03.668745   41236 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0722 00:16:03.668752   41236 command_runner.go:130] > # creation as a file is not desired either.
	I0722 00:16:03.668762   41236 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0722 00:16:03.668767   41236 command_runner.go:130] > # the hostname is being managed dynamically.
	I0722 00:16:03.668773   41236 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0722 00:16:03.668776   41236 command_runner.go:130] > # ]
	I0722 00:16:03.668781   41236 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0722 00:16:03.668789   41236 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0722 00:16:03.668797   41236 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0722 00:16:03.668803   41236 command_runner.go:130] > # Each entry in the table should follow the format:
	I0722 00:16:03.668807   41236 command_runner.go:130] > #
	I0722 00:16:03.668811   41236 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0722 00:16:03.668818   41236 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0722 00:16:03.668861   41236 command_runner.go:130] > # runtime_type = "oci"
	I0722 00:16:03.668867   41236 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0722 00:16:03.668872   41236 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0722 00:16:03.668876   41236 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0722 00:16:03.668881   41236 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0722 00:16:03.668887   41236 command_runner.go:130] > # monitor_env = []
	I0722 00:16:03.668891   41236 command_runner.go:130] > # privileged_without_host_devices = false
	I0722 00:16:03.668895   41236 command_runner.go:130] > # allowed_annotations = []
	I0722 00:16:03.668899   41236 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0722 00:16:03.668905   41236 command_runner.go:130] > # Where:
	I0722 00:16:03.668910   41236 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0722 00:16:03.668916   41236 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0722 00:16:03.668924   41236 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0722 00:16:03.668930   41236 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0722 00:16:03.668936   41236 command_runner.go:130] > #   in $PATH.
	I0722 00:16:03.668941   41236 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0722 00:16:03.668950   41236 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0722 00:16:03.668958   41236 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0722 00:16:03.668961   41236 command_runner.go:130] > #   state.
	I0722 00:16:03.668967   41236 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0722 00:16:03.668975   41236 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0722 00:16:03.668980   41236 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0722 00:16:03.668985   41236 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0722 00:16:03.668992   41236 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0722 00:16:03.668998   41236 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0722 00:16:03.669006   41236 command_runner.go:130] > #   The currently recognized values are:
	I0722 00:16:03.669012   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0722 00:16:03.669020   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0722 00:16:03.669025   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0722 00:16:03.669033   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0722 00:16:03.669040   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0722 00:16:03.669047   41236 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0722 00:16:03.669053   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0722 00:16:03.669061   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0722 00:16:03.669067   41236 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0722 00:16:03.669072   41236 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0722 00:16:03.669077   41236 command_runner.go:130] > #   deprecated option "conmon".
	I0722 00:16:03.669083   41236 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0722 00:16:03.669090   41236 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0722 00:16:03.669096   41236 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0722 00:16:03.669101   41236 command_runner.go:130] > #   should be moved to the container's cgroup
	I0722 00:16:03.669107   41236 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0722 00:16:03.669112   41236 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0722 00:16:03.669118   41236 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0722 00:16:03.669124   41236 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0722 00:16:03.669127   41236 command_runner.go:130] > #
	I0722 00:16:03.669131   41236 command_runner.go:130] > # Using the seccomp notifier feature:
	I0722 00:16:03.669136   41236 command_runner.go:130] > #
	I0722 00:16:03.669142   41236 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0722 00:16:03.669150   41236 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0722 00:16:03.669153   41236 command_runner.go:130] > #
	I0722 00:16:03.669158   41236 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0722 00:16:03.669171   41236 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0722 00:16:03.669175   41236 command_runner.go:130] > #
	I0722 00:16:03.669180   41236 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0722 00:16:03.669186   41236 command_runner.go:130] > # feature.
	I0722 00:16:03.669189   41236 command_runner.go:130] > #
	I0722 00:16:03.669194   41236 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0722 00:16:03.669200   41236 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0722 00:16:03.669205   41236 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0722 00:16:03.669213   41236 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0722 00:16:03.669220   41236 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0722 00:16:03.669223   41236 command_runner.go:130] > #
	I0722 00:16:03.669228   41236 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0722 00:16:03.669236   41236 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0722 00:16:03.669239   41236 command_runner.go:130] > #
	I0722 00:16:03.669244   41236 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0722 00:16:03.669249   41236 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0722 00:16:03.669253   41236 command_runner.go:130] > #
	I0722 00:16:03.669258   41236 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0722 00:16:03.669266   41236 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0722 00:16:03.669269   41236 command_runner.go:130] > # limitation.
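Putting the notifier rules above together: the pod requests the feature via the annotation, must use restartPolicy Never, and the selected runtime handler must list the annotation in allowed_annotations (the runc handler below does not, so this is a sketch rather than something this particular config would honor):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notify-demo
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.9
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault
	EOF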
	I0722 00:16:03.669274   41236 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0722 00:16:03.669281   41236 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0722 00:16:03.669284   41236 command_runner.go:130] > runtime_type = "oci"
	I0722 00:16:03.669288   41236 command_runner.go:130] > runtime_root = "/run/runc"
	I0722 00:16:03.669296   41236 command_runner.go:130] > runtime_config_path = ""
	I0722 00:16:03.669303   41236 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0722 00:16:03.669307   41236 command_runner.go:130] > monitor_cgroup = "pod"
	I0722 00:16:03.669311   41236 command_runner.go:130] > monitor_exec_cgroup = ""
	I0722 00:16:03.669315   41236 command_runner.go:130] > monitor_env = [
	I0722 00:16:03.669320   41236 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0722 00:16:03.669329   41236 command_runner.go:130] > ]
	I0722 00:16:03.669333   41236 command_runner.go:130] > privileged_without_host_devices = false
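Extra handlers follow the same table shape as the runc entry above. A sketch that registers a hypothetical crun handler and exposes it to Kubernetes via a RuntimeClass (binary path and names illustrative):

	# Hypothetical second runtime handler
	sudo tee /etc/crio/crio.conf.d/30-crun.conf >/dev/null <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	EOF
	sudo systemctl restart crio

	# Matching RuntimeClass so pods can select the handler
	kubectl apply -f - <<-'EOF'
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: crun
	handler: crun
	EOF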
	I0722 00:16:03.669340   41236 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0722 00:16:03.669347   41236 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0722 00:16:03.669352   41236 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0722 00:16:03.669361   41236 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0722 00:16:03.669372   41236 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0722 00:16:03.669380   41236 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0722 00:16:03.669388   41236 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0722 00:16:03.669397   41236 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0722 00:16:03.669403   41236 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0722 00:16:03.669409   41236 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0722 00:16:03.669412   41236 command_runner.go:130] > # Example:
	I0722 00:16:03.669416   41236 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0722 00:16:03.669420   41236 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0722 00:16:03.669425   41236 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0722 00:16:03.669431   41236 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0722 00:16:03.669434   41236 command_runner.go:130] > # cpuset = 0
	I0722 00:16:03.669438   41236 command_runner.go:130] > # cpushares = "0-1"
	I0722 00:16:03.669441   41236 command_runner.go:130] > # Where:
	I0722 00:16:03.669445   41236 command_runner.go:130] > # The workload name is workload-type.
	I0722 00:16:03.669451   41236 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0722 00:16:03.669456   41236 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0722 00:16:03.669461   41236 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0722 00:16:03.669468   41236 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0722 00:16:03.669473   41236 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
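Following the commented example above end to end, a pod opts into the workload with the activation annotation and overrides one resource for a named container (a sketch; workload names follow the example, values are illustrative):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/demo: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.9
	EOF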
	I0722 00:16:03.669477   41236 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0722 00:16:03.669483   41236 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0722 00:16:03.669486   41236 command_runner.go:130] > # Default value is set to true
	I0722 00:16:03.669490   41236 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0722 00:16:03.669495   41236 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0722 00:16:03.669499   41236 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0722 00:16:03.669503   41236 command_runner.go:130] > # Default value is set to 'false'
	I0722 00:16:03.669507   41236 command_runner.go:130] > # disable_hostport_mapping = false
	I0722 00:16:03.669512   41236 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0722 00:16:03.669514   41236 command_runner.go:130] > #
	I0722 00:16:03.669520   41236 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0722 00:16:03.669525   41236 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0722 00:16:03.669530   41236 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0722 00:16:03.669536   41236 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0722 00:16:03.669541   41236 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0722 00:16:03.669544   41236 command_runner.go:130] > [crio.image]
	I0722 00:16:03.669553   41236 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0722 00:16:03.669557   41236 command_runner.go:130] > # default_transport = "docker://"
	I0722 00:16:03.669563   41236 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0722 00:16:03.669568   41236 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0722 00:16:03.669572   41236 command_runner.go:130] > # global_auth_file = ""
	I0722 00:16:03.669579   41236 command_runner.go:130] > # The image used to instantiate infra containers.
	I0722 00:16:03.669587   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.669591   41236 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0722 00:16:03.669597   41236 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0722 00:16:03.669604   41236 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0722 00:16:03.669608   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.669616   41236 command_runner.go:130] > # pause_image_auth_file = ""
	I0722 00:16:03.669622   41236 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0722 00:16:03.669628   41236 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0722 00:16:03.669634   41236 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0722 00:16:03.669640   41236 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0722 00:16:03.669648   41236 command_runner.go:130] > # pause_command = "/pause"
	I0722 00:16:03.669656   41236 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0722 00:16:03.669662   41236 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0722 00:16:03.669667   41236 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0722 00:16:03.669675   41236 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0722 00:16:03.669682   41236 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0722 00:16:03.669690   41236 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0722 00:16:03.669693   41236 command_runner.go:130] > # pinned_images = [
	I0722 00:16:03.669699   41236 command_runner.go:130] > # ]
	I0722 00:16:03.669705   41236 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0722 00:16:03.669712   41236 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0722 00:16:03.669718   41236 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0722 00:16:03.669724   41236 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0722 00:16:03.669729   41236 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0722 00:16:03.669733   41236 command_runner.go:130] > # signature_policy = ""
	I0722 00:16:03.669738   41236 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0722 00:16:03.669747   41236 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0722 00:16:03.669753   41236 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0722 00:16:03.669761   41236 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0722 00:16:03.669766   41236 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0722 00:16:03.669776   41236 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0722 00:16:03.669782   41236 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0722 00:16:03.669790   41236 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0722 00:16:03.669794   41236 command_runner.go:130] > # changing them here.
	I0722 00:16:03.669798   41236 command_runner.go:130] > # insecure_registries = [
	I0722 00:16:03.669801   41236 command_runner.go:130] > # ]
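As the comments advise, registry trust settings usually belong in containers-registries.conf(5) rather than here. A sketch drop-in marking a private registry as insecure (registry name illustrative):

	# Hypothetical registries.conf drop-in for a local insecure registry
	sudo tee /etc/containers/registries.conf.d/90-local.conf >/dev/null <<-'EOF'
	[[registry]]
	location = "registry.local:5000"
	insecure = true
	EOF
	sudo systemctl restart crio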
	I0722 00:16:03.669807   41236 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0722 00:16:03.669812   41236 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0722 00:16:03.669816   41236 command_runner.go:130] > # image_volumes = "mkdir"
	I0722 00:16:03.669823   41236 command_runner.go:130] > # Temporary directory to use for storing big files
	I0722 00:16:03.669827   41236 command_runner.go:130] > # big_files_temporary_dir = ""
	I0722 00:16:03.669839   41236 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0722 00:16:03.669843   41236 command_runner.go:130] > # CNI plugins.
	I0722 00:16:03.669847   41236 command_runner.go:130] > [crio.network]
	I0722 00:16:03.669852   41236 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0722 00:16:03.669858   41236 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0722 00:16:03.669862   41236 command_runner.go:130] > # cni_default_network = ""
	I0722 00:16:03.669869   41236 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0722 00:16:03.669873   41236 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0722 00:16:03.669878   41236 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0722 00:16:03.669884   41236 command_runner.go:130] > # plugin_dirs = [
	I0722 00:16:03.669887   41236 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0722 00:16:03.669890   41236 command_runner.go:130] > # ]
	I0722 00:16:03.669895   41236 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0722 00:16:03.669900   41236 command_runner.go:130] > [crio.metrics]
	I0722 00:16:03.669904   41236 command_runner.go:130] > # Globally enable or disable metrics support.
	I0722 00:16:03.669908   41236 command_runner.go:130] > enable_metrics = true
	I0722 00:16:03.669912   41236 command_runner.go:130] > # Specify enabled metrics collectors.
	I0722 00:16:03.669916   41236 command_runner.go:130] > # Per default all metrics are enabled.
	I0722 00:16:03.669922   41236 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0722 00:16:03.669930   41236 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0722 00:16:03.669935   41236 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0722 00:16:03.669941   41236 command_runner.go:130] > # metrics_collectors = [
	I0722 00:16:03.669944   41236 command_runner.go:130] > # 	"operations",
	I0722 00:16:03.669949   41236 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0722 00:16:03.669953   41236 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0722 00:16:03.669963   41236 command_runner.go:130] > # 	"operations_errors",
	I0722 00:16:03.669967   41236 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0722 00:16:03.669974   41236 command_runner.go:130] > # 	"image_pulls_by_name",
	I0722 00:16:03.669977   41236 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0722 00:16:03.669981   41236 command_runner.go:130] > # 	"image_pulls_failures",
	I0722 00:16:03.669988   41236 command_runner.go:130] > # 	"image_pulls_successes",
	I0722 00:16:03.669991   41236 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0722 00:16:03.669995   41236 command_runner.go:130] > # 	"image_layer_reuse",
	I0722 00:16:03.669999   41236 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0722 00:16:03.670003   41236 command_runner.go:130] > # 	"containers_oom_total",
	I0722 00:16:03.670006   41236 command_runner.go:130] > # 	"containers_oom",
	I0722 00:16:03.670010   41236 command_runner.go:130] > # 	"processes_defunct",
	I0722 00:16:03.670013   41236 command_runner.go:130] > # 	"operations_total",
	I0722 00:16:03.670017   41236 command_runner.go:130] > # 	"operations_latency_seconds",
	I0722 00:16:03.670021   41236 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0722 00:16:03.670025   41236 command_runner.go:130] > # 	"operations_errors_total",
	I0722 00:16:03.670029   41236 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0722 00:16:03.670033   41236 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0722 00:16:03.670040   41236 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0722 00:16:03.670044   41236 command_runner.go:130] > # 	"image_pulls_success_total",
	I0722 00:16:03.670050   41236 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0722 00:16:03.670056   41236 command_runner.go:130] > # 	"containers_oom_count_total",
	I0722 00:16:03.670061   41236 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0722 00:16:03.670065   41236 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0722 00:16:03.670071   41236 command_runner.go:130] > # ]
	I0722 00:16:03.670076   41236 command_runner.go:130] > # The port on which the metrics server will listen.
	I0722 00:16:03.670079   41236 command_runner.go:130] > # metrics_port = 9090
	I0722 00:16:03.670084   41236 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0722 00:16:03.670089   41236 command_runner.go:130] > # metrics_socket = ""
	I0722 00:16:03.670094   41236 command_runner.go:130] > # The certificate for the secure metrics server.
	I0722 00:16:03.670100   41236 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0722 00:16:03.670108   41236 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0722 00:16:03.670112   41236 command_runner.go:130] > # certificate on any modification event.
	I0722 00:16:03.670115   41236 command_runner.go:130] > # metrics_cert = ""
	I0722 00:16:03.670120   41236 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0722 00:16:03.670127   41236 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0722 00:16:03.670135   41236 command_runner.go:130] > # metrics_key = ""
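Since enable_metrics is set to true above and the default metrics_port is 9090, the Prometheus endpoint can be scraped directly on the node. A minimal sketch, assuming the metrics server is reachable at 127.0.0.1:9090 (an assumption based on the defaults shown, not taken from this run):

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Default CRI-O metrics port from the config above; assumes we are
	// running on the node itself, so localhost reaches the server.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("scrape failed:", err)
		return
	}
	defer resp.Body.Close()

	// Print only the CRI-O collectors; per the comments above, the same
	// metric may appear with a "crio_" or "container_runtime_" prefix.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
			fmt.Println(line)
		}
	}
}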
	I0722 00:16:03.670142   41236 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0722 00:16:03.670146   41236 command_runner.go:130] > [crio.tracing]
	I0722 00:16:03.670151   41236 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0722 00:16:03.670155   41236 command_runner.go:130] > # enable_tracing = false
	I0722 00:16:03.670159   41236 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0722 00:16:03.670164   41236 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0722 00:16:03.670170   41236 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0722 00:16:03.670176   41236 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0722 00:16:03.670180   41236 command_runner.go:130] > # CRI-O NRI configuration.
	I0722 00:16:03.670183   41236 command_runner.go:130] > [crio.nri]
	I0722 00:16:03.670188   41236 command_runner.go:130] > # Globally enable or disable NRI.
	I0722 00:16:03.670191   41236 command_runner.go:130] > # enable_nri = false
	I0722 00:16:03.670196   41236 command_runner.go:130] > # NRI socket to listen on.
	I0722 00:16:03.670200   41236 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0722 00:16:03.670204   41236 command_runner.go:130] > # NRI plugin directory to use.
	I0722 00:16:03.670208   41236 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0722 00:16:03.670213   41236 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0722 00:16:03.670219   41236 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0722 00:16:03.670224   41236 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0722 00:16:03.670230   41236 command_runner.go:130] > # nri_disable_connections = false
	I0722 00:16:03.670235   41236 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0722 00:16:03.670240   41236 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0722 00:16:03.670246   41236 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0722 00:16:03.670250   41236 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0722 00:16:03.670258   41236 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0722 00:16:03.670262   41236 command_runner.go:130] > [crio.stats]
	I0722 00:16:03.670271   41236 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0722 00:16:03.670279   41236 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0722 00:16:03.670283   41236 command_runner.go:130] > # stats_collection_period = 0
	I0722 00:16:03.670423   41236 cni.go:84] Creating CNI manager for ""
	I0722 00:16:03.670434   41236 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 00:16:03.670445   41236 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:16:03.670467   41236 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-332426 NodeName:multinode-332426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:16:03.670585   41236 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-332426"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
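	The KubeletConfiguration block above disables disk-pressure eviction by setting every evictionHard threshold to "0%" and turns off image GC with imageGCHighThresholdPercent: 100. A small sketch that round-trips just those fields through gopkg.in/yaml.v3 to confirm the parsed values; the struct models only the fields of interest and is illustrative, not minikube's own type:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletCfg models only the eviction-related fields from the
// KubeletConfiguration document above; illustrative, not minikube's type.
type kubeletCfg struct {
	Kind                        string            `yaml:"kind"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
}

const doc = `
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
`

func main() {
	var cfg kubeletCfg
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	// With every threshold at "0%", the kubelet never evicts pods on
	// disk pressure; imageGCHighThresholdPercent=100 disables image GC.
	fmt.Printf("%s: evictionHard=%v\n", cfg.Kind, cfg.EvictionHard)
}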
	
	I0722 00:16:03.670669   41236 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:16:03.680252   41236 command_runner.go:130] > kubeadm
	I0722 00:16:03.680269   41236 command_runner.go:130] > kubectl
	I0722 00:16:03.680275   41236 command_runner.go:130] > kubelet
	I0722 00:16:03.680317   41236 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:16:03.680368   41236 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:16:03.689319   41236 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0722 00:16:03.705214   41236 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:16:03.720584   41236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0722 00:16:03.735802   41236 ssh_runner.go:195] Run: grep 192.168.39.67	control-plane.minikube.internal$ /etc/hosts
	I0722 00:16:03.739155   41236 command_runner.go:130] > 192.168.39.67	control-plane.minikube.internal
	I0722 00:16:03.739295   41236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:16:03.870043   41236 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:16:03.884645   41236 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426 for IP: 192.168.39.67
	I0722 00:16:03.884664   41236 certs.go:194] generating shared ca certs ...
	I0722 00:16:03.884683   41236 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:16:03.884841   41236 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:16:03.884892   41236 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:16:03.884906   41236 certs.go:256] generating profile certs ...
	I0722 00:16:03.884999   41236 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/client.key
	I0722 00:16:03.885075   41236 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.key.b93420c1
	I0722 00:16:03.885131   41236 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.key
	I0722 00:16:03.885144   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 00:16:03.885169   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 00:16:03.885188   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 00:16:03.885203   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 00:16:03.885226   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 00:16:03.885253   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 00:16:03.885272   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 00:16:03.885289   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 00:16:03.885354   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:16:03.885398   41236 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:16:03.885412   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:16:03.885451   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:16:03.885491   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:16:03.885521   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:16:03.885581   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:16:03.885635   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:03.885654   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0722 00:16:03.885672   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0722 00:16:03.886960   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:16:03.910693   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:16:03.932387   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:16:03.953903   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:16:03.975334   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:16:03.997368   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 00:16:04.018872   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:16:04.039827   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:16:04.061057   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:16:04.082572   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:16:04.103991   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:16:04.125131   41236 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:16:04.140030   41236 ssh_runner.go:195] Run: openssl version
	I0722 00:16:04.145229   41236 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0722 00:16:04.145361   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:16:04.154826   41236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:16:04.158953   41236 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:16:04.158988   41236 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:16:04.159038   41236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:16:04.164187   41236 command_runner.go:130] > 51391683
	I0722 00:16:04.164254   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:16:04.172546   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:16:04.182156   41236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:16:04.186005   41236 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:16:04.186097   41236 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:16:04.186147   41236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:16:04.191191   41236 command_runner.go:130] > 3ec20f2e
	I0722 00:16:04.191257   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:16:04.199765   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:16:04.209604   41236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:04.213560   41236 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:04.213588   41236 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:04.213627   41236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:04.218764   41236 command_runner.go:130] > b5213941
	I0722 00:16:04.218927   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
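The three link steps above follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A sketch of the same procedure from Go, shelling out to the openssl binary exactly as the log does; the paths are placeholders and the function name is hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash reproduces the "openssl x509 -hash" + "ln -fs" steps
// from the log: compute the subject hash of pemPath and symlink it into
// certsDir as <hash>.0. Hypothetical helper, not minikube's own code.
func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683" in the log above
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // emulate ln -f: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	// Placeholder paths; in the log the sources live under
	// /usr/share/ca-certificates and the links go into /etc/ssl/certs.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}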
	I0722 00:16:04.227452   41236 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:16:04.231500   41236 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:16:04.231526   41236 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0722 00:16:04.231535   41236 command_runner.go:130] > Device: 253,1	Inode: 7339051     Links: 1
	I0722 00:16:04.231545   41236 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 00:16:04.231554   41236 command_runner.go:130] > Access: 2024-07-22 00:09:20.218897765 +0000
	I0722 00:16:04.231562   41236 command_runner.go:130] > Modify: 2024-07-22 00:09:20.218897765 +0000
	I0722 00:16:04.231569   41236 command_runner.go:130] > Change: 2024-07-22 00:09:20.218897765 +0000
	I0722 00:16:04.231576   41236 command_runner.go:130] >  Birth: 2024-07-22 00:09:20.218897765 +0000
	I0722 00:16:04.231634   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:16:04.236788   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.236924   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:16:04.242056   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.242100   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:16:04.247211   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.247252   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:16:04.252139   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.252312   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:16:04.257176   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.257318   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 00:16:04.262426   41236 command_runner.go:130] > Certificate will not expire
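Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours). The equivalent check in Go with crypto/x509, as a sketch; the file path is a placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks several certs under
	// /var/lib/minikube/certs with -checkend 86400 (24h).
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}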
	I0722 00:16:04.262489   41236 kubeadm.go:392] StartCluster: {Name:multinode-332426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:16:04.262641   41236 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:16:04.262693   41236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:16:04.296982   41236 command_runner.go:130] > bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846
	I0722 00:16:04.297014   41236 command_runner.go:130] > 5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de
	I0722 00:16:04.297023   41236 command_runner.go:130] > be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7
	I0722 00:16:04.297035   41236 command_runner.go:130] > 84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421
	I0722 00:16:04.297043   41236 command_runner.go:130] > cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7
	I0722 00:16:04.297052   41236 command_runner.go:130] > 6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24
	I0722 00:16:04.297060   41236 command_runner.go:130] > d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b
	I0722 00:16:04.297071   41236 command_runner.go:130] > 0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea
	I0722 00:16:04.297100   41236 cri.go:89] found id: "bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846"
	I0722 00:16:04.297110   41236 cri.go:89] found id: "5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de"
	I0722 00:16:04.297115   41236 cri.go:89] found id: "be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7"
	I0722 00:16:04.297119   41236 cri.go:89] found id: "84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421"
	I0722 00:16:04.297123   41236 cri.go:89] found id: "cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7"
	I0722 00:16:04.297128   41236 cri.go:89] found id: "6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24"
	I0722 00:16:04.297132   41236 cri.go:89] found id: "d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b"
	I0722 00:16:04.297136   41236 cri.go:89] found id: "0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea"
	I0722 00:16:04.297140   41236 cri.go:89] found id: ""
	I0722 00:16:04.297188   41236 ssh_runner.go:195] Run: sudo runc list -f json
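The container discovery above runs `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and treats each output line as a container ID. A sketch of that listing-and-parsing step in Go, assuming crictl is on PATH and the caller has the required privileges:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log runs: quiet mode prints one container ID per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id) // mirrors the cri.go "found id:" lines above
	}
}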
	
	
	==> CRI-O <==
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.608275711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721607471608255411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6fd9e2f-52d1-4dd6-b21c-d1cd625d4ef4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.608986722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0459dc34-2ddc-4472-a66e-67dfcbc90c75 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.609043454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0459dc34-2ddc-4472-a66e-67dfcbc90c75 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.609426572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74e365f9a940e3cd15b0f86dd171eadaeca29cf74b7d57829e01a412f2b63b29,PodSandboxId:891d28ebec68e635cb101ce0d11b46b33643979cb2b2cf3082273157a29eb80d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721607404076720989,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54bcf11731a98d85b4d61b01c20e0db73cdd9acf3e988a095ec21e7b7d3f4501,PodSandboxId:deb2045cfa5a5718f27e74cce48a7747be515c460417aa4f3b58b26edb0d98a3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721607370550629349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376b5ca7df77acb823a9edf98a98de76d8921e29cb3f3d83b6ca8e80dd9adad,PodSandboxId:a5d86fab1d641a2ff38a9e9de8942b08b1fbde9f32161e93b1f0136db0c8a5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721607370420683446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edc446bb04896ca905fae0a4b6da09ae02715062a8fe8136c49b2053071a5902,PodSandboxId:3b7dccc46d65ab5a45bd1aafdc3b5d260a5054d941f0c1e0a878532c0b12bbf6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721607370347750826,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},An
notations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45562da2aee19b5644bbde258e52e1a1003d8f48a83daa2c330a0f91ef2bd3cd,PodSandboxId:26cfa9830a4731bd59fb2ecec688c192ae76c501e5ad580957bff50d387e079c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721607370351748330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82aa1b89b93801811d20cdf64f0f91e160736eca23a9665b31342ed3d3505b2,PodSandboxId:a1eaa28dfbc32e7eeaec24d6c82b901a54c3d2173b8a21a4bb602f37860118e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721607366550872229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotations:map[string]string{io.kubernetes.container.hash: bc690f80,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf2fa4343e40390af084045f5b500056976d69c67d290fd03e7bd83c2a4dc55,PodSandboxId:abf2787e3e3ddb6f9d7c65fc47aff9d18cc012faf441f614ad8bafb50b5b0271,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721607366528535782,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3a51dbfdeccbde8bebb1c9443df0cdd4d847fcc049d2fa977b25371d4672b9,PodSandboxId:76bcada916ab25be3ba089a808dc1b49b1e8e949f0cc4ad3519db3e287bec768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721607366502021903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca1b020e36a140ceba7ae156489e3a9eeb54c7816a7bb4279159edd347584f8,PodSandboxId:611b738c9a42901bef5746c6a5062e10ed2b80a0d365e8bc7699279af871e649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721607366471377575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865d5fd7e8026011ea33af66b5cd14f914a97dc721a10b278825bb9ff83a10dd,PodSandboxId:0e8b20f046d00ddc310638b441bf2452abd9e13498b11dd8d02462dd7169cee5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721607050406276139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846,PodSandboxId:5b314dc7aee93075a178107b9a08aa8c52545f2cd99901092055b111e993231a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721606999160809325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de,PodSandboxId:618c48779675e1886825de4aa794ab0bb196417b6cf21dda6740c765485b91c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721606999158508559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},Annotations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7,PodSandboxId:841fb5d9879ce282583f3b4a60d0beda36ef35abf98c9e6e351dd77d63c3734f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721606987583988092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421,PodSandboxId:7cbccdfd31f6cf028a08626712316b40dd780e4bea6427e8a57d69db8564e2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721606983692911141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.kubernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7,PodSandboxId:bc26157c8e97bb849699d9f16d4060385f543523ac0a258140e1b55245d36ae4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721606963925009138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24,PodSandboxId:6023bbb4eb5ac5ff4ede3da90c5b263663d0953a214bb588e4e049094c057589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721606963887376353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotation
s:map[string]string{io.kubernetes.container.hash: bc690f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b,PodSandboxId:d5ba48a08b5b1651507acc0fcbb246aed3240ef94155cb8178fb2fd45361d388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721606963878152937,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea,PodSandboxId:8943995b24cf19442343a44ce5837c11a58c9ea0c76fa95b10390c1a16ed3c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721606963810452872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0459dc34-2ddc-4472-a66e-67dfcbc90c75 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.646513871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55f85a0e-2628-42f6-9073-7e2f5fbaf51b name=/runtime.v1.RuntimeService/Version
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.646602780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55f85a0e-2628-42f6-9073-7e2f5fbaf51b name=/runtime.v1.RuntimeService/Version
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.647891269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3ff9897-9776-40ff-acb1-c880e17aa73e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.648590095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721607471648286360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3ff9897-9776-40ff-acb1-c880e17aa73e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.649047459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=278c1f7c-ce35-4e48-8455-0dd6e20e4599 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.649105240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=278c1f7c-ce35-4e48-8455-0dd6e20e4599 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.649489873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74e365f9a940e3cd15b0f86dd171eadaeca29cf74b7d57829e01a412f2b63b29,PodSandboxId:891d28ebec68e635cb101ce0d11b46b33643979cb2b2cf3082273157a29eb80d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721607404076720989,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54bcf11731a98d85b4d61b01c20e0db73cdd9acf3e988a095ec21e7b7d3f4501,PodSandboxId:deb2045cfa5a5718f27e74cce48a7747be515c460417aa4f3b58b26edb0d98a3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721607370550629349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376b5ca7df77acb823a9edf98a98de76d8921e29cb3f3d83b6ca8e80dd9adad,PodSandboxId:a5d86fab1d641a2ff38a9e9de8942b08b1fbde9f32161e93b1f0136db0c8a5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721607370420683446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edc446bb04896ca905fae0a4b6da09ae02715062a8fe8136c49b2053071a5902,PodSandboxId:3b7dccc46d65ab5a45bd1aafdc3b5d260a5054d941f0c1e0a878532c0b12bbf6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721607370347750826,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},An
notations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45562da2aee19b5644bbde258e52e1a1003d8f48a83daa2c330a0f91ef2bd3cd,PodSandboxId:26cfa9830a4731bd59fb2ecec688c192ae76c501e5ad580957bff50d387e079c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721607370351748330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82aa1b89b93801811d20cdf64f0f91e160736eca23a9665b31342ed3d3505b2,PodSandboxId:a1eaa28dfbc32e7eeaec24d6c82b901a54c3d2173b8a21a4bb602f37860118e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721607366550872229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotations:map[string]string{io.kubernetes.container.hash: bc690f80,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf2fa4343e40390af084045f5b500056976d69c67d290fd03e7bd83c2a4dc55,PodSandboxId:abf2787e3e3ddb6f9d7c65fc47aff9d18cc012faf441f614ad8bafb50b5b0271,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721607366528535782,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3a51dbfdeccbde8bebb1c9443df0cdd4d847fcc049d2fa977b25371d4672b9,PodSandboxId:76bcada916ab25be3ba089a808dc1b49b1e8e949f0cc4ad3519db3e287bec768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721607366502021903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca1b020e36a140ceba7ae156489e3a9eeb54c7816a7bb4279159edd347584f8,PodSandboxId:611b738c9a42901bef5746c6a5062e10ed2b80a0d365e8bc7699279af871e649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721607366471377575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865d5fd7e8026011ea33af66b5cd14f914a97dc721a10b278825bb9ff83a10dd,PodSandboxId:0e8b20f046d00ddc310638b441bf2452abd9e13498b11dd8d02462dd7169cee5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721607050406276139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846,PodSandboxId:5b314dc7aee93075a178107b9a08aa8c52545f2cd99901092055b111e993231a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721606999160809325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de,PodSandboxId:618c48779675e1886825de4aa794ab0bb196417b6cf21dda6740c765485b91c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721606999158508559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},Annotations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7,PodSandboxId:841fb5d9879ce282583f3b4a60d0beda36ef35abf98c9e6e351dd77d63c3734f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721606987583988092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421,PodSandboxId:7cbccdfd31f6cf028a08626712316b40dd780e4bea6427e8a57d69db8564e2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721606983692911141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.kubernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7,PodSandboxId:bc26157c8e97bb849699d9f16d4060385f543523ac0a258140e1b55245d36ae4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721606963925009138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24,PodSandboxId:6023bbb4eb5ac5ff4ede3da90c5b263663d0953a214bb588e4e049094c057589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721606963887376353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotation
s:map[string]string{io.kubernetes.container.hash: bc690f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b,PodSandboxId:d5ba48a08b5b1651507acc0fcbb246aed3240ef94155cb8178fb2fd45361d388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721606963878152937,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea,PodSandboxId:8943995b24cf19442343a44ce5837c11a58c9ea0c76fa95b10390c1a16ed3c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721606963810452872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=278c1f7c-ce35-4e48-8455-0dd6e20e4599 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.692233246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f3a995b-1c08-4e0f-b033-e5c3a4e2fe5b name=/runtime.v1.RuntimeService/Version
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.692397877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f3a995b-1c08-4e0f-b033-e5c3a4e2fe5b name=/runtime.v1.RuntimeService/Version
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.693513847Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b0733e0-ed6f-4a43-b421-0e3cd73ba3a7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.693920276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721607471693900488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b0733e0-ed6f-4a43-b421-0e3cd73ba3a7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.694638977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4a4b8cb-995c-4384-8bc4-63760ca05929 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.694695943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4a4b8cb-995c-4384-8bc4-63760ca05929 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.695065987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74e365f9a940e3cd15b0f86dd171eadaeca29cf74b7d57829e01a412f2b63b29,PodSandboxId:891d28ebec68e635cb101ce0d11b46b33643979cb2b2cf3082273157a29eb80d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721607404076720989,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54bcf11731a98d85b4d61b01c20e0db73cdd9acf3e988a095ec21e7b7d3f4501,PodSandboxId:deb2045cfa5a5718f27e74cce48a7747be515c460417aa4f3b58b26edb0d98a3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721607370550629349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376b5ca7df77acb823a9edf98a98de76d8921e29cb3f3d83b6ca8e80dd9adad,PodSandboxId:a5d86fab1d641a2ff38a9e9de8942b08b1fbde9f32161e93b1f0136db0c8a5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721607370420683446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edc446bb04896ca905fae0a4b6da09ae02715062a8fe8136c49b2053071a5902,PodSandboxId:3b7dccc46d65ab5a45bd1aafdc3b5d260a5054d941f0c1e0a878532c0b12bbf6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721607370347750826,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},An
notations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45562da2aee19b5644bbde258e52e1a1003d8f48a83daa2c330a0f91ef2bd3cd,PodSandboxId:26cfa9830a4731bd59fb2ecec688c192ae76c501e5ad580957bff50d387e079c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721607370351748330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82aa1b89b93801811d20cdf64f0f91e160736eca23a9665b31342ed3d3505b2,PodSandboxId:a1eaa28dfbc32e7eeaec24d6c82b901a54c3d2173b8a21a4bb602f37860118e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721607366550872229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotations:map[string]string{io.kubernetes.container.hash: bc690f80,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf2fa4343e40390af084045f5b500056976d69c67d290fd03e7bd83c2a4dc55,PodSandboxId:abf2787e3e3ddb6f9d7c65fc47aff9d18cc012faf441f614ad8bafb50b5b0271,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721607366528535782,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3a51dbfdeccbde8bebb1c9443df0cdd4d847fcc049d2fa977b25371d4672b9,PodSandboxId:76bcada916ab25be3ba089a808dc1b49b1e8e949f0cc4ad3519db3e287bec768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721607366502021903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca1b020e36a140ceba7ae156489e3a9eeb54c7816a7bb4279159edd347584f8,PodSandboxId:611b738c9a42901bef5746c6a5062e10ed2b80a0d365e8bc7699279af871e649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721607366471377575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865d5fd7e8026011ea33af66b5cd14f914a97dc721a10b278825bb9ff83a10dd,PodSandboxId:0e8b20f046d00ddc310638b441bf2452abd9e13498b11dd8d02462dd7169cee5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721607050406276139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846,PodSandboxId:5b314dc7aee93075a178107b9a08aa8c52545f2cd99901092055b111e993231a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721606999160809325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de,PodSandboxId:618c48779675e1886825de4aa794ab0bb196417b6cf21dda6740c765485b91c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721606999158508559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},Annotations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7,PodSandboxId:841fb5d9879ce282583f3b4a60d0beda36ef35abf98c9e6e351dd77d63c3734f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721606987583988092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421,PodSandboxId:7cbccdfd31f6cf028a08626712316b40dd780e4bea6427e8a57d69db8564e2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721606983692911141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.kubernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7,PodSandboxId:bc26157c8e97bb849699d9f16d4060385f543523ac0a258140e1b55245d36ae4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721606963925009138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24,PodSandboxId:6023bbb4eb5ac5ff4ede3da90c5b263663d0953a214bb588e4e049094c057589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721606963887376353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotation
s:map[string]string{io.kubernetes.container.hash: bc690f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b,PodSandboxId:d5ba48a08b5b1651507acc0fcbb246aed3240ef94155cb8178fb2fd45361d388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721606963878152937,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea,PodSandboxId:8943995b24cf19442343a44ce5837c11a58c9ea0c76fa95b10390c1a16ed3c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721606963810452872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4a4b8cb-995c-4384-8bc4-63760ca05929 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.734780471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d77485fe-954e-4c01-a3f1-59f41445ebb3 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.734854244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d77485fe-954e-4c01-a3f1-59f41445ebb3 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.735905808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=352547ee-78b8-4afd-93be-7c5c1054a8bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.736499504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721607471736466453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=352547ee-78b8-4afd-93be-7c5c1054a8bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.737450967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=979db399-89f4-4c6e-989a-155802c3b47c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.737529520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=979db399-89f4-4c6e-989a-155802c3b47c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:17:51 multinode-332426 crio[2855]: time="2024-07-22 00:17:51.738021898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74e365f9a940e3cd15b0f86dd171eadaeca29cf74b7d57829e01a412f2b63b29,PodSandboxId:891d28ebec68e635cb101ce0d11b46b33643979cb2b2cf3082273157a29eb80d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721607404076720989,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54bcf11731a98d85b4d61b01c20e0db73cdd9acf3e988a095ec21e7b7d3f4501,PodSandboxId:deb2045cfa5a5718f27e74cce48a7747be515c460417aa4f3b58b26edb0d98a3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721607370550629349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376b5ca7df77acb823a9edf98a98de76d8921e29cb3f3d83b6ca8e80dd9adad,PodSandboxId:a5d86fab1d641a2ff38a9e9de8942b08b1fbde9f32161e93b1f0136db0c8a5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721607370420683446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edc446bb04896ca905fae0a4b6da09ae02715062a8fe8136c49b2053071a5902,PodSandboxId:3b7dccc46d65ab5a45bd1aafdc3b5d260a5054d941f0c1e0a878532c0b12bbf6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721607370347750826,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},An
notations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45562da2aee19b5644bbde258e52e1a1003d8f48a83daa2c330a0f91ef2bd3cd,PodSandboxId:26cfa9830a4731bd59fb2ecec688c192ae76c501e5ad580957bff50d387e079c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721607370351748330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82aa1b89b93801811d20cdf64f0f91e160736eca23a9665b31342ed3d3505b2,PodSandboxId:a1eaa28dfbc32e7eeaec24d6c82b901a54c3d2173b8a21a4bb602f37860118e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721607366550872229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotations:map[string]string{io.kubernetes.container.hash: bc690f80,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf2fa4343e40390af084045f5b500056976d69c67d290fd03e7bd83c2a4dc55,PodSandboxId:abf2787e3e3ddb6f9d7c65fc47aff9d18cc012faf441f614ad8bafb50b5b0271,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721607366528535782,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3a51dbfdeccbde8bebb1c9443df0cdd4d847fcc049d2fa977b25371d4672b9,PodSandboxId:76bcada916ab25be3ba089a808dc1b49b1e8e949f0cc4ad3519db3e287bec768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721607366502021903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca1b020e36a140ceba7ae156489e3a9eeb54c7816a7bb4279159edd347584f8,PodSandboxId:611b738c9a42901bef5746c6a5062e10ed2b80a0d365e8bc7699279af871e649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721607366471377575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865d5fd7e8026011ea33af66b5cd14f914a97dc721a10b278825bb9ff83a10dd,PodSandboxId:0e8b20f046d00ddc310638b441bf2452abd9e13498b11dd8d02462dd7169cee5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721607050406276139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846,PodSandboxId:5b314dc7aee93075a178107b9a08aa8c52545f2cd99901092055b111e993231a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721606999160809325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de,PodSandboxId:618c48779675e1886825de4aa794ab0bb196417b6cf21dda6740c765485b91c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721606999158508559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},Annotations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7,PodSandboxId:841fb5d9879ce282583f3b4a60d0beda36ef35abf98c9e6e351dd77d63c3734f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721606987583988092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421,PodSandboxId:7cbccdfd31f6cf028a08626712316b40dd780e4bea6427e8a57d69db8564e2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721606983692911141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.kubernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7,PodSandboxId:bc26157c8e97bb849699d9f16d4060385f543523ac0a258140e1b55245d36ae4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721606963925009138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24,PodSandboxId:6023bbb4eb5ac5ff4ede3da90c5b263663d0953a214bb588e4e049094c057589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721606963887376353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotation
s:map[string]string{io.kubernetes.container.hash: bc690f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b,PodSandboxId:d5ba48a08b5b1651507acc0fcbb246aed3240ef94155cb8178fb2fd45361d388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721606963878152937,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea,PodSandboxId:8943995b24cf19442343a44ce5837c11a58c9ea0c76fa95b10390c1a16ed3c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721606963810452872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=979db399-89f4-4c6e-989a-155802c3b47c name=/runtime.v1.RuntimeService/ListContainers
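
The repeated /runtime.v1.RuntimeService/ListContainers entries above are CRI-O answering routine polls over its gRPC socket; each response carries the same container list with raw CreatedAt Unix-nanosecond timestamps. A minimal sketch of issuing the same RPC with the Kubernetes cri-api client (the socket path is CRI-O's default and the output format is illustrative; neither is taken from this run):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O runtime socket (default path; adjust for other runtimes).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter takes the "No filters were applied, returning full
	// container list" path logged from server/container_list.go above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// CreatedAt is Unix nanoseconds (e.g. 1721607404076720989).
		fmt.Printf("%-13s %-25s %-17s %s\n", c.Id[:13], c.Metadata.Name,
			c.State, time.Unix(0, c.CreatedAt).Format(time.RFC3339))
	}
}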
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	74e365f9a940e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   891d28ebec68e       busybox-fc5497c4f-d4fqv
	54bcf11731a98       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   deb2045cfa5a5       kindnet-8hmt4
	0376b5ca7df77       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   a5d86fab1d641       coredns-7db6d8ff4d-kgmn4
	45562da2aee19       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   26cfa9830a473       kube-proxy-lj2fx
	edc446bb04896       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   3b7dccc46d65a       storage-provisioner
	d82aa1b89b938       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   a1eaa28dfbc32       etcd-multinode-332426
	ccf2fa4343e40       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   abf2787e3e3dd       kube-controller-manager-multinode-332426
	0d3a51dbfdecc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   76bcada916ab2       kube-scheduler-multinode-332426
	8ca1b020e36a1       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   611b738c9a429       kube-apiserver-multinode-332426
	865d5fd7e8026       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   0e8b20f046d00       busybox-fc5497c4f-d4fqv
	bf77a115bd3b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   5b314dc7aee93       coredns-7db6d8ff4d-kgmn4
	5ffa33729a9f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   618c48779675e       storage-provisioner
	be5af95309f16       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   841fb5d9879ce       kindnet-8hmt4
	84be68af94193       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   7cbccdfd31f6c       kube-proxy-lj2fx
	cb8198ba979fc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   bc26157c8e97b       kube-apiserver-multinode-332426
	6640fb78d9d74       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   6023bbb4eb5ac       etcd-multinode-332426
	d1fe9fff883b0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   d5ba48a08b5b1       kube-scheduler-multinode-332426
	0b655b503e2b5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   8943995b24cf1       kube-controller-manager-multinode-332426
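	
	The listing above has the column layout of CRI-O's crictl (container ID, image, created, state, name, attempt, pod ID, pod name). As a hedged sketch, a similar listing could likely be regenerated against this profile (assuming the profile name multinode-332426 taken from the node labels below):
	
	  minikube -p multinode-332426 ssh "sudo crictl ps -a"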
	
	
	==> coredns [0376b5ca7df77acb823a9edf98a98de76d8921e29cb3f3d83b6ca8e80dd9adad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35252 - 30027 "HINFO IN 2016048654068247314.6170083794807566555. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015554366s
	
	
	==> coredns [bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846] <==
	[INFO] 10.244.1.2:50179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001621508s
	[INFO] 10.244.1.2:59257 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086082s
	[INFO] 10.244.1.2:54360 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084272s
	[INFO] 10.244.1.2:49805 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001221189s
	[INFO] 10.244.1.2:56137 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081096s
	[INFO] 10.244.1.2:39288 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064923s
	[INFO] 10.244.1.2:53603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168163s
	[INFO] 10.244.0.3:33445 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088238s
	[INFO] 10.244.0.3:60751 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080196s
	[INFO] 10.244.0.3:49851 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050668s
	[INFO] 10.244.0.3:58365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097549s
	[INFO] 10.244.1.2:37491 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122295s
	[INFO] 10.244.1.2:45475 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150054s
	[INFO] 10.244.1.2:47471 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089484s
	[INFO] 10.244.1.2:50935 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088533s
	[INFO] 10.244.0.3:32821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115775s
	[INFO] 10.244.0.3:33144 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174417s
	[INFO] 10.244.0.3:40417 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086886s
	[INFO] 10.244.0.3:36269 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121613s
	[INFO] 10.244.1.2:56272 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111783s
	[INFO] 10.244.1.2:58196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109756s
	[INFO] 10.244.1.2:41786 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007171s
	[INFO] 10.244.1.2:55219 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083701s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-332426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-332426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=multinode-332426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_09_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-332426
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:17:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:16:09 +0000   Mon, 22 Jul 2024 00:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:16:09 +0000   Mon, 22 Jul 2024 00:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:16:09 +0000   Mon, 22 Jul 2024 00:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:16:09 +0000   Mon, 22 Jul 2024 00:09:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-332426
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 72d19522baaf4f0f8c46bd95cf97927b
	  System UUID:                72d19522-baaf-4f0f-8c46-bd95cf97927b
	  Boot ID:                    a7af36a1-0feb-4ad7-b1f5-c8b7a5023aa8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d4fqv                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 coredns-7db6d8ff4d-kgmn4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m9s
	  kube-system                 etcd-multinode-332426                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m23s
	  kube-system                 kindnet-8hmt4                                100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m9s
	  kube-system                 kube-apiserver-multinode-332426              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-multinode-332426    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-lj2fx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-scheduler-multinode-332426              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m7s                   kube-proxy       
	  Normal  Starting                 101s                   kube-proxy       
	  Normal  Starting                 8m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m29s (x8 over 8m29s)  kubelet          Node multinode-332426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s (x8 over 8m29s)  kubelet          Node multinode-332426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s (x7 over 8m29s)  kubelet          Node multinode-332426 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m23s                  kubelet          Node multinode-332426 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m23s                  kubelet          Node multinode-332426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m23s                  kubelet          Node multinode-332426 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m23s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m10s                  node-controller  Node multinode-332426 event: Registered Node multinode-332426 in Controller
	  Normal  NodeReady                7m54s                  kubelet          Node multinode-332426 status is now: NodeReady
	  Normal  Starting                 107s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)    kubelet          Node multinode-332426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)    kubelet          Node multinode-332426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)    kubelet          Node multinode-332426 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                    node-controller  Node multinode-332426 event: Registered Node multinode-332426 in Controller
	
	
	Name:               multinode-332426-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-332426-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=multinode-332426
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T00_16_52_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:16:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-332426-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:17:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:17:21 +0000   Mon, 22 Jul 2024 00:16:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:17:21 +0000   Mon, 22 Jul 2024 00:16:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:17:21 +0000   Mon, 22 Jul 2024 00:16:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:17:21 +0000   Mon, 22 Jul 2024 00:17:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    multinode-332426-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e55c65b8a45541329327c2cf589759eb
	  System UUID:                e55c65b8-a455-4132-9327-c2cf589759eb
	  Boot ID:                    63ca0462-1dcc-4320-ab37-0c4e5a009724
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6ldsm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-fx662              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m26s
	  kube-system                 kube-proxy-rjx57           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m21s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m26s (x2 over 7m26s)  kubelet     Node multinode-332426-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m26s (x2 over 7m26s)  kubelet     Node multinode-332426-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m26s (x2 over 7m26s)  kubelet     Node multinode-332426-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m26s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m7s                   kubelet     Node multinode-332426-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-332426-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-332426-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-332426-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-332426-m02 status is now: NodeReady
	
	
	Name:               multinode-332426-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-332426-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=multinode-332426
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T00_17_29_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:17:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-332426-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:17:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:17:48 +0000   Mon, 22 Jul 2024 00:17:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:17:48 +0000   Mon, 22 Jul 2024 00:17:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:17:48 +0000   Mon, 22 Jul 2024 00:17:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:17:48 +0000   Mon, 22 Jul 2024 00:17:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    multinode-332426-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ff2e7f9a5164690912ceec97cac6f65
	  System UUID:                2ff2e7f9-a516-4690-912c-eec97cac6f65
	  Boot ID:                    f155dabe-aed7-4b39-9ead-551c2108510f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5szrb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-proxy-q4dfh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m28s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m41s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 6m34s)  kubelet     Node multinode-332426-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 6m34s)  kubelet     Node multinode-332426-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 6m34s)  kubelet     Node multinode-332426-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet     Node multinode-332426-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet     Node multinode-332426-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet     Node multinode-332426-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet     Node multinode-332426-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m27s                  kubelet     Node multinode-332426-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-332426-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-332426-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-332426-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-332426-m03 status is now: NodeReady
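	
	The three node descriptions above are in kubectl describe format; a minimal sketch for regenerating them (assuming minikube named the kubeconfig context after the profile, multinode-332426) is:
	
	  kubectl --context multinode-332426 describe nodes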
	
	
	==> dmesg <==
	[  +0.056950] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051918] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.176767] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.113851] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.249101] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.875821] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +3.839021] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.059762] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.476364] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.084302] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.375047] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.701257] systemd-fstab-generator[1456]: Ignoring "noauto" option for root device
	[  +5.138911] kauditd_printk_skb: 59 callbacks suppressed
	[Jul22 00:10] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 00:16] systemd-fstab-generator[2774]: Ignoring "noauto" option for root device
	[  +0.143610] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.169574] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.138935] systemd-fstab-generator[2812]: Ignoring "noauto" option for root device
	[  +0.259073] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.652540] systemd-fstab-generator[2938]: Ignoring "noauto" option for root device
	[  +1.832847] systemd-fstab-generator[3064]: Ignoring "noauto" option for root device
	[  +4.672332] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.226435] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.995520] systemd-fstab-generator[3906]: Ignoring "noauto" option for root device
	[ +17.535189] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24] <==
	{"level":"warn","ts":"2024-07-22T00:10:26.547049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.774046ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3536892338775504069 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-332426-m02.17e460780bfa017c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-332426-m02.17e460780bfa017c\" value_size:640 lease:3536892338775503057 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-22T00:10:26.547872Z","caller":"traceutil/trace.go:171","msg":"trace[290633637] transaction","detail":"{read_only:false; response_revision:491; number_of_response:1; }","duration":"174.947128ms","start":"2024-07-22T00:10:26.372891Z","end":"2024-07-22T00:10:26.547839Z","steps":["trace[290633637] 'process raft request'  (duration: 174.888271ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:10:26.54811Z","caller":"traceutil/trace.go:171","msg":"trace[804757579] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"246.522571ms","start":"2024-07-22T00:10:26.301575Z","end":"2024-07-22T00:10:26.548098Z","steps":["trace[804757579] 'process raft request'  (duration: 75.548913ms)","trace[804757579] 'compare'  (duration: 169.487172ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T00:10:26.548346Z","caller":"traceutil/trace.go:171","msg":"trace[2035800605] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"245.58864ms","start":"2024-07-22T00:10:26.302714Z","end":"2024-07-22T00:10:26.548303Z","steps":["trace[2035800605] 'read index received'  (duration: 74.415418ms)","trace[2035800605] 'applied index is now lower than readState.Index'  (duration: 171.172336ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:10:26.548682Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.948916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-332426-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T00:10:26.548994Z","caller":"traceutil/trace.go:171","msg":"trace[2127525726] range","detail":"{range_begin:/registry/csinodes/multinode-332426-m02; range_end:; response_count:0; response_revision:491; }","duration":"246.252369ms","start":"2024-07-22T00:10:26.302694Z","end":"2024-07-22T00:10:26.548946Z","steps":["trace[2127525726] 'agreement among raft nodes before linearized reading'  (duration: 245.866074ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:10:26.549762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.003849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-332426-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-22T00:10:26.550014Z","caller":"traceutil/trace.go:171","msg":"trace[1542341300] range","detail":"{range_begin:/registry/minions/multinode-332426-m02; range_end:; response_count:1; response_revision:491; }","duration":"247.260778ms","start":"2024-07-22T00:10:26.302739Z","end":"2024-07-22T00:10:26.55Z","steps":["trace[1542341300] 'agreement among raft nodes before linearized reading'  (duration: 246.988231ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:11:18.944262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.525279ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3536892338775504511 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-332426-m03.17e460843f67a3a7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-332426-m03.17e460843f67a3a7\" value_size:642 lease:3536892338775504109 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-22T00:11:18.944527Z","caller":"traceutil/trace.go:171","msg":"trace[2028451399] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:662; }","duration":"147.873927ms","start":"2024-07-22T00:11:18.796643Z","end":"2024-07-22T00:11:18.944517Z","steps":["trace[2028451399] 'read index received'  (duration: 146.414006ms)","trace[2028451399] 'applied index is now lower than readState.Index'  (duration: 1.459316ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T00:11:18.944598Z","caller":"traceutil/trace.go:171","msg":"trace[2028221205] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"240.001189ms","start":"2024-07-22T00:11:18.70459Z","end":"2024-07-22T00:11:18.944591Z","steps":["trace[2028221205] 'process raft request'  (duration: 74.072658ms)","trace[2028221205] 'compare'  (duration: 165.354087ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T00:11:18.944796Z","caller":"traceutil/trace.go:171","msg":"trace[1126455199] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"187.400127ms","start":"2024-07-22T00:11:18.757387Z","end":"2024-07-22T00:11:18.944788Z","steps":["trace[1126455199] 'process raft request'  (duration: 187.083262ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:11:18.944953Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.297146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-332426-m03\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-22T00:11:18.945501Z","caller":"traceutil/trace.go:171","msg":"trace[655306108] range","detail":"{range_begin:/registry/minions/multinode-332426-m03; range_end:; response_count:1; response_revision:623; }","duration":"148.883727ms","start":"2024-07-22T00:11:18.796608Z","end":"2024-07-22T00:11:18.945492Z","steps":["trace[655306108] 'agreement among raft nodes before linearized reading'  (duration: 148.26967ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:11:28.18203Z","caller":"traceutil/trace.go:171","msg":"trace[2143136036] transaction","detail":"{read_only:false; response_revision:669; number_of_response:1; }","duration":"216.550447ms","start":"2024-07-22T00:11:27.965451Z","end":"2024-07-22T00:11:28.182001Z","steps":["trace[2143136036] 'process raft request'  (duration: 216.27488ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:14:31.152652Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-22T00:14:31.152799Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-332426","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	{"level":"warn","ts":"2024-07-22T00:14:31.152895Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:14:31.152918Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:14:31.152982Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:14:31.153041Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T00:14:31.2294Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ce564ad586a3115","current-leader-member-id":"ce564ad586a3115"}
	{"level":"info","ts":"2024-07-22T00:14:31.231631Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-07-22T00:14:31.231857Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-07-22T00:14:31.231914Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-332426","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	
	
	==> etcd [d82aa1b89b93801811d20cdf64f0f91e160736eca23a9665b31342ed3d3505b2] <==
	{"level":"info","ts":"2024-07-22T00:16:07.113489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T00:16:07.113506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T00:16:07.113771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 switched to configuration voters=(929259593797349653)"}
	{"level":"info","ts":"2024-07-22T00:16:07.113835Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","added-peer-id":"ce564ad586a3115","added-peer-peer-urls":["https://192.168.39.67:2380"]}
	{"level":"info","ts":"2024-07-22T00:16:07.11396Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:16:07.114003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:16:07.145014Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T00:16:07.145253Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ce564ad586a3115","initial-advertise-peer-urls":["https://192.168.39.67:2380"],"listen-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.67:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T00:16:07.145291Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T00:16:07.148582Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-07-22T00:16:07.150349Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-07-22T00:16:08.183774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T00:16:08.18389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:16:08.18396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 received MsgPreVoteResp from ce564ad586a3115 at term 2"}
	{"level":"info","ts":"2024-07-22T00:16:08.184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T00:16:08.184025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 received MsgVoteResp from ce564ad586a3115 at term 3"}
	{"level":"info","ts":"2024-07-22T00:16:08.184052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T00:16:08.184081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce564ad586a3115 elected leader ce564ad586a3115 at term 3"}
	{"level":"info","ts":"2024-07-22T00:16:08.190181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:16:08.190147Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ce564ad586a3115","local-member-attributes":"{Name:multinode-332426 ClientURLs:[https://192.168.39.67:2379]}","request-path":"/0/members/ce564ad586a3115/attributes","cluster-id":"429166af17098d53","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:16:08.191597Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:16:08.191871Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:16:08.191898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:16:08.192504Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.67:2379"}
	{"level":"info","ts":"2024-07-22T00:16:08.193621Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:17:52 up 8 min,  0 users,  load average: 0.49, 0.32, 0.16
	Linux multinode-332426 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [54bcf11731a98d85b4d61b01c20e0db73cdd9acf3e988a095ec21e7b7d3f4501] <==
	I0722 00:17:11.372192       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:17:21.380220       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:17:21.380272       1 main.go:299] handling current node
	I0722 00:17:21.380290       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:17:21.380297       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:17:21.380507       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:17:21.380540       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:17:31.371693       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:17:31.371740       1 main.go:299] handling current node
	I0722 00:17:31.371774       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:17:31.371780       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:17:31.371972       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:17:31.371993       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.2.0/24] 
	I0722 00:17:41.371996       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:17:41.372074       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.2.0/24] 
	I0722 00:17:41.372220       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:17:41.372240       1 main.go:299] handling current node
	I0722 00:17:41.372264       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:17:41.372269       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:17:51.372735       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:17:51.372869       1 main.go:299] handling current node
	I0722 00:17:51.372954       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:17:51.372967       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:17:51.374005       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:17:51.374032       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7] <==
	I0722 00:13:48.362262       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:13:58.369602       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:13:58.369643       1 main.go:299] handling current node
	I0722 00:13:58.369658       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:13:58.369663       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:13:58.369791       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:13:58.369812       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:14:08.367262       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:14:08.367420       1 main.go:299] handling current node
	I0722 00:14:08.367449       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:14:08.367467       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:14:08.367628       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:14:08.367743       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:14:18.371260       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:14:18.371344       1 main.go:299] handling current node
	I0722 00:14:18.371362       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:14:18.371375       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:14:18.371541       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:14:18.371561       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:14:28.369552       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:14:28.369616       1 main.go:299] handling current node
	I0722 00:14:28.369641       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:14:28.369646       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:14:28.369820       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:14:28.369842       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8ca1b020e36a140ceba7ae156489e3a9eeb54c7816a7bb4279159edd347584f8] <==
	I0722 00:16:09.465844       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 00:16:09.492056       1 shared_informer.go:320] Caches are synced for configmaps
	I0722 00:16:09.492143       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 00:16:09.492168       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 00:16:09.514275       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0722 00:16:09.514345       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0722 00:16:09.514466       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 00:16:09.518514       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 00:16:09.518581       1 aggregator.go:165] initial CRD sync complete...
	I0722 00:16:09.518628       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 00:16:09.518638       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 00:16:09.518644       1 cache.go:39] Caches are synced for autoregister controller
	I0722 00:16:09.520439       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 00:16:09.525891       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 00:16:09.525949       1 policy_source.go:224] refreshing policies
	I0722 00:16:09.543571       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0722 00:16:09.563782       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 00:16:10.400456       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 00:16:11.312402       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 00:16:11.442029       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 00:16:11.456613       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 00:16:11.535686       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 00:16:11.542987       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 00:16:22.323741       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 00:16:22.472999       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7] <==
	W0722 00:14:31.177069       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177099       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177129       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177160       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177213       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177298       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177594       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177696       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177738       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177772       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177805       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177840       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177871       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177906       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177936       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.177969       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.178444       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.178484       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.178517       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.178544       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.178572       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.178603       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.178629       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.180118       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:14:31.180500       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea] <==
	I0722 00:10:02.123693       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0722 00:10:26.552550       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m02\" does not exist"
	I0722 00:10:26.631608       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m02" podCIDRs=["10.244.1.0/24"]
	I0722 00:10:27.128427       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-332426-m02"
	I0722 00:10:45.297632       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:10:47.629604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.751926ms"
	I0722 00:10:47.655588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.924467ms"
	I0722 00:10:47.655756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.667µs"
	I0722 00:10:47.655870       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.277µs"
	I0722 00:10:50.772809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.621355ms"
	I0722 00:10:50.773547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.957µs"
	I0722 00:10:51.388112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488601ms"
	I0722 00:10:51.388204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.197µs"
	I0722 00:11:18.948522       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m03\" does not exist"
	I0722 00:11:18.950033       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:11:18.976584       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m03" podCIDRs=["10.244.2.0/24"]
	I0722 00:11:22.147650       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-332426-m03"
	I0722 00:11:38.885791       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:12:06.592553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:12:07.533702       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m03\" does not exist"
	I0722 00:12:07.535172       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:12:07.556613       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m03" podCIDRs=["10.244.3.0/24"]
	I0722 00:12:25.978293       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:13:12.254865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.444355ms"
	I0722 00:13:12.254978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.231µs"
	
	
	==> kube-controller-manager [ccf2fa4343e40390af084045f5b500056976d69c67d290fd03e7bd83c2a4dc55] <==
	I0722 00:16:22.850454       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 00:16:46.594889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.59089ms"
	I0722 00:16:46.595079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.816µs"
	I0722 00:16:46.608580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.749547ms"
	I0722 00:16:46.609036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.005µs"
	I0722 00:16:46.617826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.051µs"
	I0722 00:16:50.910297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m02\" does not exist"
	I0722 00:16:50.927834       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m02" podCIDRs=["10.244.1.0/24"]
	I0722 00:16:52.614523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.818µs"
	I0722 00:16:52.798150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.026µs"
	I0722 00:16:52.808681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.699µs"
	I0722 00:16:52.818577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.65µs"
	I0722 00:16:52.857502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.699µs"
	I0722 00:16:52.864572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.438µs"
	I0722 00:16:52.868892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.29µs"
	I0722 00:17:10.246496       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:17:10.264598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.619µs"
	I0722 00:17:10.280736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.334µs"
	I0722 00:17:14.251651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.877948ms"
	I0722 00:17:14.251955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.046µs"
	I0722 00:17:28.125079       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:17:29.181028       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m03\" does not exist"
	I0722 00:17:29.184048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:17:29.202723       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m03" podCIDRs=["10.244.2.0/24"]
	I0722 00:17:48.988193       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	
	
	==> kube-proxy [45562da2aee19b5644bbde258e52e1a1003d8f48a83daa2c330a0f91ef2bd3cd] <==
	I0722 00:16:10.658862       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:16:10.699827       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	I0722 00:16:10.769776       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:16:10.769830       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:16:10.769848       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:16:10.773259       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:16:10.773525       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:16:10.773548       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:16:10.775597       1 config.go:192] "Starting service config controller"
	I0722 00:16:10.775630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:16:10.775662       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:16:10.775676       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:16:10.776143       1 config.go:319] "Starting node config controller"
	I0722 00:16:10.776171       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:16:10.876304       1 shared_informer.go:320] Caches are synced for node config
	I0722 00:16:10.876388       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:16:10.876395       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421] <==
	I0722 00:09:44.095752       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:09:44.114722       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	I0722 00:09:44.169548       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:09:44.169596       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:09:44.169637       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:09:44.175618       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:09:44.175814       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:09:44.175826       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:09:44.180130       1 config.go:192] "Starting service config controller"
	I0722 00:09:44.180240       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:09:44.180359       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:09:44.180459       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:09:44.181635       1 config.go:319] "Starting node config controller"
	I0722 00:09:44.181673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:09:44.281202       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:09:44.281386       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:09:44.281900       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0d3a51dbfdeccbde8bebb1c9443df0cdd4d847fcc049d2fa977b25371d4672b9] <==
	I0722 00:16:07.559589       1 serving.go:380] Generated self-signed cert in-memory
	W0722 00:16:09.416533       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 00:16:09.416571       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:16:09.416581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 00:16:09.416626       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 00:16:09.512197       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 00:16:09.512261       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:16:09.516918       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 00:16:09.516953       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:16:09.517767       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 00:16:09.517849       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 00:16:09.618516       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b] <==
	E0722 00:09:26.584527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 00:09:26.584577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:09:26.584627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:09:26.584695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 00:09:26.584723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 00:09:26.584762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:09:26.584784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:09:26.584821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 00:09:26.584842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 00:09:26.586241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:09:26.586291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:09:27.508868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:09:27.508909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 00:09:27.638483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 00:09:27.638660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 00:09:27.657434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 00:09:27.657599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 00:09:27.772745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:09:27.772981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:09:27.777485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:09:27.777612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 00:09:28.024212       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:09:28.024268       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 00:09:30.676406       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 00:14:31.151461       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 22 00:16:07 multinode-332426 kubelet[3071]: I0722 00:16:07.337956    3071 kubelet_node_status.go:73] "Attempting to register node" node="multinode-332426"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.610727    3071 kubelet_node_status.go:112] "Node was previously registered" node="multinode-332426"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.610832    3071 kubelet_node_status.go:76] "Successfully registered node" node="multinode-332426"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.612682    3071 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.616942    3071 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.814788    3071 apiserver.go:52] "Watching apiserver"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.818661    3071 topology_manager.go:215] "Topology Admit Handler" podUID="c759a961-9e1a-4487-8e22-50b46a782fc1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kgmn4"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.818854    3071 topology_manager.go:215] "Topology Admit Handler" podUID="d6945ba2-29c0-406e-aa81-491a78d7f5b6" podNamespace="kube-system" podName="kindnet-8hmt4"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.819002    3071 topology_manager.go:215] "Topology Admit Handler" podUID="5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d" podNamespace="kube-system" podName="kube-proxy-lj2fx"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.819091    3071 topology_manager.go:215] "Topology Admit Handler" podUID="eb343e45-5269-4f6d-81cc-ff99ee75d01e" podNamespace="kube-system" podName="storage-provisioner"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.819170    3071 topology_manager.go:215] "Topology Admit Handler" podUID="303001b7-6534-4dcf-8179-14278c447b01" podNamespace="default" podName="busybox-fc5497c4f-d4fqv"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.824840    3071 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.925679    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6945ba2-29c0-406e-aa81-491a78d7f5b6-lib-modules\") pod \"kindnet-8hmt4\" (UID: \"d6945ba2-29c0-406e-aa81-491a78d7f5b6\") " pod="kube-system/kindnet-8hmt4"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.926207    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/eb343e45-5269-4f6d-81cc-ff99ee75d01e-tmp\") pod \"storage-provisioner\" (UID: \"eb343e45-5269-4f6d-81cc-ff99ee75d01e\") " pod="kube-system/storage-provisioner"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.926364    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d-lib-modules\") pod \"kube-proxy-lj2fx\" (UID: \"5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d\") " pod="kube-system/kube-proxy-lj2fx"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.927178    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6945ba2-29c0-406e-aa81-491a78d7f5b6-xtables-lock\") pod \"kindnet-8hmt4\" (UID: \"d6945ba2-29c0-406e-aa81-491a78d7f5b6\") " pod="kube-system/kindnet-8hmt4"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.927393    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6945ba2-29c0-406e-aa81-491a78d7f5b6-cni-cfg\") pod \"kindnet-8hmt4\" (UID: \"d6945ba2-29c0-406e-aa81-491a78d7f5b6\") " pod="kube-system/kindnet-8hmt4"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.927479    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d-xtables-lock\") pod \"kube-proxy-lj2fx\" (UID: \"5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d\") " pod="kube-system/kube-proxy-lj2fx"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: E0722 00:16:09.962576    3071 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-332426\" already exists" pod="kube-system/kube-apiserver-multinode-332426"
	Jul 22 00:16:19 multinode-332426 kubelet[3071]: I0722 00:16:19.205603    3071 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 22 00:17:05 multinode-332426 kubelet[3071]: E0722 00:17:05.882259    3071 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:17:05 multinode-332426 kubelet[3071]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:17:05 multinode-332426 kubelet[3071]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:17:05 multinode-332426 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:17:05 multinode-332426 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:17:51.360499   42741 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-5094/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
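
The "bufio.Scanner: token too long" in the stderr above is a stock Go failure mode: bufio.Scanner rejects any token larger than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB), and lastStart.txt evidently contains a longer line. A minimal sketch of the failure and the usual fix (growing the buffer with Scanner.Buffer); the file name is taken from the log, everything else is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // file named in the error above; any file with a >64 KiB line reproduces this
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call, a line longer than bufio.MaxScanTokenSize (64 KiB)
	// makes sc.Scan() return false and sc.Err() return bufio.ErrTooLong,
	// which prints exactly as "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB

	for sc.Scan() {
		_ = sc.Text() // process one log line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}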
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-332426 -n multinode-332426
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-332426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.54s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 stop
E0722 00:17:58.218116   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0722 00:19:55.172305   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-332426 stop: exit status 82 (2m0.46331068s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-332426-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-332426 stop": exit status 82
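
Here exit status 82 accompanies GUEST_STOP_TIMEOUT: the stop was issued, but multinode-332426-m02 was still "Running" when the roughly two-minute wait expired. A minimal sketch of that stop-then-poll shape, assuming hypothetical requestStop/vmState helpers in place of the real kvm2 driver calls (this is not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// requestStop and vmState are hypothetical stand-ins for the hypervisor
// driver calls; they are not real minikube APIs.
func requestStop() error { return nil }       // e.g. send an ACPI shutdown
func vmState() string    { return "Running" } // e.g. query libvirt for the domain state

// stopWithTimeout issues a stop, then polls until the VM leaves "Running"
// or the deadline passes -- the pattern behind the timeout reported above.
func stopWithTimeout(timeout time.Duration) error {
	if err := requestStop(); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if vmState() != "Running" {
			return nil // VM stopped in time
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Short timeout for the demo; the failing run above waited ~2 minutes.
	if err := stopWithTimeout(6 * time.Second); err != nil {
		fmt.Println("stop timed out:", err) // the condition reported as exit status 82
	}
}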
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-332426 status: exit status 3 (18.789786813s)

                                                
                                                
-- stdout --
	multinode-332426
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-332426-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:20:14.622895   43411 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0722 00:20:14.622926   43411 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-332426 status" : exit status 3
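
The status probe fails at the TCP layer before SSH is even attempted: "dial tcp 192.168.39.232:22: connect: no route to host" means m02's address no longer answers at all, consistent with a VM wedged mid-stop. A minimal reachability sketch using only the standard library; the address is taken from the error above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SSH endpoint of multinode-332426-m02, as reported in the status error.
	addr := "192.168.39.232:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On the failing host this yields the same *net.OpError the status
		// command surfaced: "dial tcp 192.168.39.232:22: connect: no route to host".
		fmt.Println("unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}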
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-332426 -n multinode-332426
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-332426 logs -n 25: (1.411964674s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m02:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426:/home/docker/cp-test_multinode-332426-m02_multinode-332426.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n multinode-332426 sudo cat                                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /home/docker/cp-test_multinode-332426-m02_multinode-332426.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m02:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03:/home/docker/cp-test_multinode-332426-m02_multinode-332426-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n multinode-332426-m03 sudo cat                                   | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /home/docker/cp-test_multinode-332426-m02_multinode-332426-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp testdata/cp-test.txt                                                | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3372010046/001/cp-test_multinode-332426-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426:/home/docker/cp-test_multinode-332426-m03_multinode-332426.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n multinode-332426 sudo cat                                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /home/docker/cp-test_multinode-332426-m03_multinode-332426.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt                       | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m02:/home/docker/cp-test_multinode-332426-m03_multinode-332426-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n                                                                 | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | multinode-332426-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-332426 ssh -n multinode-332426-m02 sudo cat                                   | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	|         | /home/docker/cp-test_multinode-332426-m03_multinode-332426-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-332426 node stop m03                                                          | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:11 UTC |
	| node    | multinode-332426 node start                                                             | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:11 UTC | 22 Jul 24 00:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-332426                                                                | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:12 UTC |                     |
	| stop    | -p multinode-332426                                                                     | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:12 UTC |                     |
	| start   | -p multinode-332426                                                                     | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:14 UTC | 22 Jul 24 00:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-332426                                                                | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:17 UTC |                     |
	| node    | multinode-332426 node delete                                                            | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:17 UTC | 22 Jul 24 00:17 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-332426 stop                                                                   | multinode-332426 | jenkins | v1.33.1 | 22 Jul 24 00:17 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:14:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:14:30.349404   41236 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:14:30.349527   41236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:14:30.349537   41236 out.go:304] Setting ErrFile to fd 2...
	I0722 00:14:30.349543   41236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:14:30.349732   41236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:14:30.350251   41236 out.go:298] Setting JSON to false
	I0722 00:14:30.351152   41236 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3414,"bootTime":1721603856,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:14:30.351207   41236 start.go:139] virtualization: kvm guest
	I0722 00:14:30.353371   41236 out.go:177] * [multinode-332426] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:14:30.354528   41236 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:14:30.354531   41236 notify.go:220] Checking for updates...
	I0722 00:14:30.356633   41236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:14:30.357710   41236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:14:30.358689   41236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:14:30.359791   41236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:14:30.360808   41236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:14:30.362406   41236 config.go:182] Loaded profile config "multinode-332426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:14:30.362530   41236 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:14:30.362998   41236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:14:30.363050   41236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:14:30.377754   41236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37011
	I0722 00:14:30.378267   41236 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:14:30.378940   41236 main.go:141] libmachine: Using API Version  1
	I0722 00:14:30.378963   41236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:14:30.379298   41236 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:14:30.379493   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:14:30.414565   41236 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:14:30.415753   41236 start.go:297] selected driver: kvm2
	I0722 00:14:30.415774   41236 start.go:901] validating driver "kvm2" against &{Name:multinode-332426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:14:30.415887   41236 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:14:30.416229   41236 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:14:30.416289   41236 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:14:30.431491   41236 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:14:30.432145   41236 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:14:30.432218   41236 cni.go:84] Creating CNI manager for ""
	I0722 00:14:30.432234   41236 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 00:14:30.432301   41236 start.go:340] cluster config:
	{Name:multinode-332426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:14:30.432428   41236 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:14:30.434171   41236 out.go:177] * Starting "multinode-332426" primary control-plane node in "multinode-332426" cluster
	I0722 00:14:30.435227   41236 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:14:30.435278   41236 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:14:30.435291   41236 cache.go:56] Caching tarball of preloaded images
	I0722 00:14:30.435365   41236 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:14:30.435378   41236 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:14:30.435506   41236 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/config.json ...
	I0722 00:14:30.435867   41236 start.go:360] acquireMachinesLock for multinode-332426: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:14:30.435946   41236 start.go:364] duration metric: took 48.255µs to acquireMachinesLock for "multinode-332426"
	I0722 00:14:30.435965   41236 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:14:30.435976   41236 fix.go:54] fixHost starting: 
	I0722 00:14:30.436226   41236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:14:30.436264   41236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:14:30.450293   41236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0722 00:14:30.450700   41236 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:14:30.451101   41236 main.go:141] libmachine: Using API Version  1
	I0722 00:14:30.451124   41236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:14:30.451496   41236 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:14:30.451713   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:14:30.451851   41236 main.go:141] libmachine: (multinode-332426) Calling .GetState
	I0722 00:14:30.453619   41236 fix.go:112] recreateIfNeeded on multinode-332426: state=Running err=<nil>
	W0722 00:14:30.453641   41236 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:14:30.455290   41236 out.go:177] * Updating the running kvm2 "multinode-332426" VM ...
	I0722 00:14:30.456363   41236 machine.go:94] provisionDockerMachine start ...
	I0722 00:14:30.456381   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:14:30.456562   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.459373   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.459779   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.459801   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.459910   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:30.460059   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.460227   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.460372   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:30.460520   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:14:30.460713   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:14:30.460723   41236 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:14:30.563966   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-332426
	
	I0722 00:14:30.563991   41236 main.go:141] libmachine: (multinode-332426) Calling .GetMachineName
	I0722 00:14:30.564209   41236 buildroot.go:166] provisioning hostname "multinode-332426"
	I0722 00:14:30.564238   41236 main.go:141] libmachine: (multinode-332426) Calling .GetMachineName
	I0722 00:14:30.564467   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.567096   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.567513   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.567540   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.567700   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:30.567882   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.568106   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.568252   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:30.568422   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:14:30.568596   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:14:30.568613   41236 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-332426 && echo "multinode-332426" | sudo tee /etc/hostname
	I0722 00:14:30.686711   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-332426
	
	I0722 00:14:30.686736   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.689564   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.689974   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.690011   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.690126   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:30.690329   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.690526   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.690687   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:30.690865   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:14:30.691110   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:14:30.691132   41236 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-332426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-332426/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-332426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:14:30.791259   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:14:30.791291   41236 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:14:30.791313   41236 buildroot.go:174] setting up certificates
	I0722 00:14:30.791324   41236 provision.go:84] configureAuth start
	I0722 00:14:30.791360   41236 main.go:141] libmachine: (multinode-332426) Calling .GetMachineName
	I0722 00:14:30.791622   41236 main.go:141] libmachine: (multinode-332426) Calling .GetIP
	I0722 00:14:30.794191   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.794647   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.794676   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.794823   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.797116   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.797407   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.797439   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.797634   41236 provision.go:143] copyHostCerts
	I0722 00:14:30.797669   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:14:30.797701   41236 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:14:30.797721   41236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:14:30.797786   41236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:14:30.797861   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:14:30.797877   41236 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:14:30.797883   41236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:14:30.797907   41236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:14:30.797944   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:14:30.797959   41236 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:14:30.797965   41236 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:14:30.797984   41236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:14:30.798023   41236 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.multinode-332426 san=[127.0.0.1 192.168.39.67 localhost minikube multinode-332426]
	I0722 00:14:30.873166   41236 provision.go:177] copyRemoteCerts
	I0722 00:14:30.873220   41236 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:14:30.873242   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:30.876170   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.876577   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:30.876600   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:30.876770   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:30.876932   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:30.877091   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:30.877255   41236 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:14:30.961184   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 00:14:30.961275   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:14:30.991405   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 00:14:30.991489   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0722 00:14:31.013536   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 00:14:31.013599   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:14:31.036527   41236 provision.go:87] duration metric: took 245.190005ms to configureAuth
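For reference, the "generating server cert" step above produces an x509 server certificate whose SANs match the logged list (127.0.0.1, 192.168.39.67, localhost, minikube, multinode-332426), signed by the profile CA. A self-contained sketch with crypto/x509; the in-memory CA stands in for ca.pem/ca-key.pem, and the key sizes and lifetimes are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; the real run loads ca.pem and ca-key.pem instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN list from the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-332426"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
            DNSNames:     []string{"localhost", "minikube", "multinode-332426"},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }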
	I0722 00:14:31.036550   41236 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:14:31.036786   41236 config.go:182] Loaded profile config "multinode-332426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:14:31.036866   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:14:31.039488   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:31.039834   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:14:31.039862   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:14:31.039959   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:14:31.040146   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:31.040305   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:14:31.040438   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:14:31.040564   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:14:31.040722   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:14:31.040734   41236 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:16:01.783677   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:16:01.783703   41236 machine.go:97] duration metric: took 1m31.327328851s to provisionDockerMachine
	I0722 00:16:01.783715   41236 start.go:293] postStartSetup for "multinode-332426" (driver="kvm2")
	I0722 00:16:01.783724   41236 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:16:01.783757   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:01.784043   41236 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:16:01.784139   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:16:01.787314   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:01.787744   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:01.787768   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:01.787966   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:16:01.788154   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:01.788315   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:16:01.788468   41236 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:16:01.869758   41236 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:16:01.873562   41236 command_runner.go:130] > NAME=Buildroot
	I0722 00:16:01.873584   41236 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0722 00:16:01.873590   41236 command_runner.go:130] > ID=buildroot
	I0722 00:16:01.873598   41236 command_runner.go:130] > VERSION_ID=2023.02.9
	I0722 00:16:01.873605   41236 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0722 00:16:01.873910   41236 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:16:01.873928   41236 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:16:01.873979   41236 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:16:01.874042   41236 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:16:01.874052   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /etc/ssl/certs/122632.pem
	I0722 00:16:01.874135   41236 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:16:01.883013   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:16:01.904725   41236 start.go:296] duration metric: took 120.995763ms for postStartSetup
	I0722 00:16:01.904768   41236 fix.go:56] duration metric: took 1m31.468793708s for fixHost
	I0722 00:16:01.904788   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:16:01.907462   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:01.907810   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:01.907832   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:01.908038   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:16:01.908232   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:01.908411   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:01.908554   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:16:01.908734   41236 main.go:141] libmachine: Using SSH client type: native
	I0722 00:16:01.908911   41236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0722 00:16:01.908920   41236 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:16:02.006917   41236 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721607361.981476151
	
	I0722 00:16:02.006943   41236 fix.go:216] guest clock: 1721607361.981476151
	I0722 00:16:02.006956   41236 fix.go:229] Guest: 2024-07-22 00:16:01.981476151 +0000 UTC Remote: 2024-07-22 00:16:01.904772468 +0000 UTC m=+91.589726844 (delta=76.703683ms)
	I0722 00:16:02.006989   41236 fix.go:200] guest clock delta is within tolerance: 76.703683ms
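The tolerance check above compares the guest's `date +%s.%N` reading with the host-side timestamp and accepts the drift if the absolute delta stays inside a window. A small sketch of that comparison; the 2s tolerance is an assumed value, and only the 76.703683ms delta comes from this log:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance returns the absolute guest/remote skew and
    // whether it falls inside the allowed window.
    func withinTolerance(guest, remote time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        guest := time.Unix(1721607361, 981476151) // parsed from "1721607361.981476151"
        remote := guest.Add(-76703683 * time.Nanosecond)
        delta, ok := withinTolerance(guest, remote, 2*time.Second)
        fmt.Printf("delta=%v within=%v\n", delta, ok)
    }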
	I0722 00:16:02.006996   41236 start.go:83] releasing machines lock for "multinode-332426", held for 1m31.57104089s
	I0722 00:16:02.007016   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:02.007321   41236 main.go:141] libmachine: (multinode-332426) Calling .GetIP
	I0722 00:16:02.009950   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.010401   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:02.010431   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.010568   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:02.011102   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:02.011272   41236 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:16:02.011363   41236 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:16:02.011410   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:16:02.011514   41236 ssh_runner.go:195] Run: cat /version.json
	I0722 00:16:02.011543   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:16:02.013987   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.014319   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.014358   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:02.014381   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.014554   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:16:02.014718   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:02.014804   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:02.014829   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:02.014856   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:16:02.015001   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:16:02.015006   41236 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:16:02.015145   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:16:02.015304   41236 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:16:02.015454   41236 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:16:02.087427   41236 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0722 00:16:02.087894   41236 ssh_runner.go:195] Run: systemctl --version
	I0722 00:16:02.121986   41236 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0722 00:16:02.122570   41236 command_runner.go:130] > systemd 252 (252)
	I0722 00:16:02.122640   41236 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0722 00:16:02.122708   41236 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:16:02.280490   41236 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 00:16:02.285880   41236 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0722 00:16:02.286009   41236 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:16:02.286064   41236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:16:02.295352   41236 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0722 00:16:02.295375   41236 start.go:495] detecting cgroup driver to use...
	I0722 00:16:02.295428   41236 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:16:02.312826   41236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:16:02.326996   41236 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:16:02.327060   41236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:16:02.340498   41236 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:16:02.353907   41236 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:16:02.512764   41236 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:16:02.660935   41236 docker.go:233] disabling docker service ...
	I0722 00:16:02.661008   41236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:16:02.677837   41236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:16:02.691085   41236 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:16:02.822227   41236 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:16:02.955461   41236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:16:02.969111   41236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:16:02.986766   41236 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0722 00:16:02.987109   41236 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:16:02.987156   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:02.996668   41236 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:16:02.996729   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.006439   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.015691   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.024951   41236 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:16:03.034675   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.044467   41236 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:16:03.054931   41236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
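Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in content. This is reconstructed from the commands, not captured from the VM, and the TOML section headers are assumptions based on stock CRI-O configuration:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]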
	I0722 00:16:03.064495   41236 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:16:03.073668   41236 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0722 00:16:03.073787   41236 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:16:03.083556   41236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:16:03.213843   41236 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:16:03.437195   41236 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:16:03.437257   41236 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:16:03.441617   41236 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0722 00:16:03.441640   41236 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0722 00:16:03.441646   41236 command_runner.go:130] > Device: 0,22	Inode: 1364        Links: 1
	I0722 00:16:03.441652   41236 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 00:16:03.441657   41236 command_runner.go:130] > Access: 2024-07-22 00:16:03.371742621 +0000
	I0722 00:16:03.441666   41236 command_runner.go:130] > Modify: 2024-07-22 00:16:03.318741047 +0000
	I0722 00:16:03.441673   41236 command_runner.go:130] > Change: 2024-07-22 00:16:03.318741047 +0000
	I0722 00:16:03.441679   41236 command_runner.go:130] >  Birth: -
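The "Will wait 60s for socket path" step boils down to polling stat until the socket file exists or the deadline passes. A minimal sketch of such a wait loop (the 500ms poll interval is an assumption; the actual loop may differ):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the
    // timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("crio socket is up")
    }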
	I0722 00:16:03.441694   41236 start.go:563] Will wait 60s for crictl version
	I0722 00:16:03.441750   41236 ssh_runner.go:195] Run: which crictl
	I0722 00:16:03.445208   41236 command_runner.go:130] > /usr/bin/crictl
	I0722 00:16:03.445270   41236 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:16:03.478857   41236 command_runner.go:130] > Version:  0.1.0
	I0722 00:16:03.478880   41236 command_runner.go:130] > RuntimeName:  cri-o
	I0722 00:16:03.478885   41236 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0722 00:16:03.478900   41236 command_runner.go:130] > RuntimeApiVersion:  v1
	I0722 00:16:03.480043   41236 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:16:03.480105   41236 ssh_runner.go:195] Run: crio --version
	I0722 00:16:03.507057   41236 command_runner.go:130] > crio version 1.29.1
	I0722 00:16:03.507077   41236 command_runner.go:130] > Version:        1.29.1
	I0722 00:16:03.507085   41236 command_runner.go:130] > GitCommit:      unknown
	I0722 00:16:03.507090   41236 command_runner.go:130] > GitCommitDate:  unknown
	I0722 00:16:03.507095   41236 command_runner.go:130] > GitTreeState:   clean
	I0722 00:16:03.507103   41236 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0722 00:16:03.507118   41236 command_runner.go:130] > GoVersion:      go1.21.6
	I0722 00:16:03.507124   41236 command_runner.go:130] > Compiler:       gc
	I0722 00:16:03.507130   41236 command_runner.go:130] > Platform:       linux/amd64
	I0722 00:16:03.507135   41236 command_runner.go:130] > Linkmode:       dynamic
	I0722 00:16:03.507142   41236 command_runner.go:130] > BuildTags:      
	I0722 00:16:03.507149   41236 command_runner.go:130] >   containers_image_ostree_stub
	I0722 00:16:03.507157   41236 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0722 00:16:03.507166   41236 command_runner.go:130] >   btrfs_noversion
	I0722 00:16:03.507174   41236 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0722 00:16:03.507182   41236 command_runner.go:130] >   libdm_no_deferred_remove
	I0722 00:16:03.507191   41236 command_runner.go:130] >   seccomp
	I0722 00:16:03.507199   41236 command_runner.go:130] > LDFlags:          unknown
	I0722 00:16:03.507206   41236 command_runner.go:130] > SeccompEnabled:   true
	I0722 00:16:03.507214   41236 command_runner.go:130] > AppArmorEnabled:  false
	I0722 00:16:03.507287   41236 ssh_runner.go:195] Run: crio --version
	I0722 00:16:03.532647   41236 command_runner.go:130] > crio version 1.29.1
	I0722 00:16:03.532669   41236 command_runner.go:130] > Version:        1.29.1
	I0722 00:16:03.532675   41236 command_runner.go:130] > GitCommit:      unknown
	I0722 00:16:03.532679   41236 command_runner.go:130] > GitCommitDate:  unknown
	I0722 00:16:03.532683   41236 command_runner.go:130] > GitTreeState:   clean
	I0722 00:16:03.532688   41236 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0722 00:16:03.532692   41236 command_runner.go:130] > GoVersion:      go1.21.6
	I0722 00:16:03.532696   41236 command_runner.go:130] > Compiler:       gc
	I0722 00:16:03.532701   41236 command_runner.go:130] > Platform:       linux/amd64
	I0722 00:16:03.532705   41236 command_runner.go:130] > Linkmode:       dynamic
	I0722 00:16:03.532709   41236 command_runner.go:130] > BuildTags:      
	I0722 00:16:03.532712   41236 command_runner.go:130] >   containers_image_ostree_stub
	I0722 00:16:03.532717   41236 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0722 00:16:03.532721   41236 command_runner.go:130] >   btrfs_noversion
	I0722 00:16:03.532726   41236 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0722 00:16:03.532730   41236 command_runner.go:130] >   libdm_no_deferred_remove
	I0722 00:16:03.532736   41236 command_runner.go:130] >   seccomp
	I0722 00:16:03.532740   41236 command_runner.go:130] > LDFlags:          unknown
	I0722 00:16:03.532745   41236 command_runner.go:130] > SeccompEnabled:   true
	I0722 00:16:03.532748   41236 command_runner.go:130] > AppArmorEnabled:  false
	I0722 00:16:03.535750   41236 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:16:03.536992   41236 main.go:141] libmachine: (multinode-332426) Calling .GetIP
	I0722 00:16:03.539700   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:03.540069   41236 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:16:03.540096   41236 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:16:03.540282   41236 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:16:03.544116   41236 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0722 00:16:03.544316   41236 kubeadm.go:883] updating cluster {Name:multinode-332426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:16:03.544456   41236 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:16:03.544500   41236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:16:03.584309   41236 command_runner.go:130] > {
	I0722 00:16:03.584334   41236 command_runner.go:130] >   "images": [
	I0722 00:16:03.584338   41236 command_runner.go:130] >     {
	I0722 00:16:03.584346   41236 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0722 00:16:03.584351   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584356   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0722 00:16:03.584360   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584364   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584378   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0722 00:16:03.584390   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0722 00:16:03.584397   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584405   41236 command_runner.go:130] >       "size": "87165492",
	I0722 00:16:03.584413   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584420   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.584434   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584440   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584444   41236 command_runner.go:130] >     },
	I0722 00:16:03.584449   41236 command_runner.go:130] >     {
	I0722 00:16:03.584454   41236 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0722 00:16:03.584459   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584465   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0722 00:16:03.584472   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584478   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584491   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0722 00:16:03.584502   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0722 00:16:03.584511   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584524   41236 command_runner.go:130] >       "size": "87174707",
	I0722 00:16:03.584537   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584553   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.584562   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584571   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584590   41236 command_runner.go:130] >     },
	I0722 00:16:03.584596   41236 command_runner.go:130] >     {
	I0722 00:16:03.584606   41236 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0722 00:16:03.584616   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584624   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0722 00:16:03.584630   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584635   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584649   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0722 00:16:03.584665   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0722 00:16:03.584674   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584681   41236 command_runner.go:130] >       "size": "1363676",
	I0722 00:16:03.584691   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584697   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.584706   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584712   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584718   41236 command_runner.go:130] >     },
	I0722 00:16:03.584721   41236 command_runner.go:130] >     {
	I0722 00:16:03.584731   41236 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0722 00:16:03.584740   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584750   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0722 00:16:03.584759   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584768   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584782   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0722 00:16:03.584802   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0722 00:16:03.584810   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584818   41236 command_runner.go:130] >       "size": "31470524",
	I0722 00:16:03.584828   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584834   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.584842   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584849   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584857   41236 command_runner.go:130] >     },
	I0722 00:16:03.584868   41236 command_runner.go:130] >     {
	I0722 00:16:03.584880   41236 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0722 00:16:03.584886   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.584893   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0722 00:16:03.584898   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584905   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.584921   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0722 00:16:03.584936   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0722 00:16:03.584944   41236 command_runner.go:130] >       ],
	I0722 00:16:03.584951   41236 command_runner.go:130] >       "size": "61245718",
	I0722 00:16:03.584960   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.584968   41236 command_runner.go:130] >       "username": "nonroot",
	I0722 00:16:03.584972   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.584979   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.584984   41236 command_runner.go:130] >     },
	I0722 00:16:03.584989   41236 command_runner.go:130] >     {
	I0722 00:16:03.585000   41236 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0722 00:16:03.585009   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585017   41236 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0722 00:16:03.585026   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585032   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585046   41236 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0722 00:16:03.585056   41236 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0722 00:16:03.585062   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585069   41236 command_runner.go:130] >       "size": "150779692",
	I0722 00:16:03.585078   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585085   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.585091   41236 command_runner.go:130] >       },
	I0722 00:16:03.585098   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585107   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585113   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585121   41236 command_runner.go:130] >     },
	I0722 00:16:03.585126   41236 command_runner.go:130] >     {
	I0722 00:16:03.585137   41236 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0722 00:16:03.585144   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585150   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0722 00:16:03.585166   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585176   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585187   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0722 00:16:03.585202   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0722 00:16:03.585210   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585216   41236 command_runner.go:130] >       "size": "117609954",
	I0722 00:16:03.585224   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585228   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.585241   41236 command_runner.go:130] >       },
	I0722 00:16:03.585250   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585260   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585268   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585277   41236 command_runner.go:130] >     },
	I0722 00:16:03.585282   41236 command_runner.go:130] >     {
	I0722 00:16:03.585292   41236 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0722 00:16:03.585301   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585309   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0722 00:16:03.585315   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585320   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585353   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0722 00:16:03.585370   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0722 00:16:03.585376   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585382   41236 command_runner.go:130] >       "size": "112198984",
	I0722 00:16:03.585390   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585397   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.585404   41236 command_runner.go:130] >       },
	I0722 00:16:03.585409   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585416   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585423   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585434   41236 command_runner.go:130] >     },
	I0722 00:16:03.585440   41236 command_runner.go:130] >     {
	I0722 00:16:03.585450   41236 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0722 00:16:03.585456   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585463   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0722 00:16:03.585468   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585473   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585487   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0722 00:16:03.585501   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0722 00:16:03.585508   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585515   41236 command_runner.go:130] >       "size": "85953945",
	I0722 00:16:03.585525   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.585531   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585540   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585550   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585555   41236 command_runner.go:130] >     },
	I0722 00:16:03.585561   41236 command_runner.go:130] >     {
	I0722 00:16:03.585572   41236 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0722 00:16:03.585586   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585593   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0722 00:16:03.585602   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585608   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585624   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0722 00:16:03.585637   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0722 00:16:03.585645   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585652   41236 command_runner.go:130] >       "size": "63051080",
	I0722 00:16:03.585659   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585666   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.585674   41236 command_runner.go:130] >       },
	I0722 00:16:03.585680   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585689   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585695   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.585704   41236 command_runner.go:130] >     },
	I0722 00:16:03.585709   41236 command_runner.go:130] >     {
	I0722 00:16:03.585721   41236 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0722 00:16:03.585731   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.585739   41236 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0722 00:16:03.585747   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585754   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.585767   41236 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0722 00:16:03.585780   41236 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0722 00:16:03.585788   41236 command_runner.go:130] >       ],
	I0722 00:16:03.585794   41236 command_runner.go:130] >       "size": "750414",
	I0722 00:16:03.585807   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.585817   41236 command_runner.go:130] >         "value": "65535"
	I0722 00:16:03.585822   41236 command_runner.go:130] >       },
	I0722 00:16:03.585831   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.585838   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.585847   41236 command_runner.go:130] >       "pinned": true
	I0722 00:16:03.585852   41236 command_runner.go:130] >     }
	I0722 00:16:03.585859   41236 command_runner.go:130] >   ]
	I0722 00:16:03.585862   41236 command_runner.go:130] > }
	I0722 00:16:03.586072   41236 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:16:03.586084   41236 crio.go:433] Images already preloaded, skipping extraction
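The `sudo crictl images --output json` output above is what the preload check parses. A hedged sketch of decoding it in Go; the struct mirrors a subset of the fields visible in this log (note that crictl reports size values as strings):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // imageList matches the shape of the JSON shown above.
    type imageList struct {
        Images []struct {
            ID          string   `json:"id"`
            RepoTags    []string `json:"repoTags"`
            RepoDigests []string `json:"repoDigests"`
            Size        string   `json:"size"`
            Pinned      bool     `json:"pinned"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            log.Fatal(err)
        }
        for _, img := range list.Images {
            // IDs in this log are 64-hex-char digests; print a short prefix.
            fmt.Println(img.ID[:12], img.RepoTags)
        }
    }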
	I0722 00:16:03.586138   41236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:16:03.617866   41236 command_runner.go:130] > {
	I0722 00:16:03.617890   41236 command_runner.go:130] >   "images": [
	I0722 00:16:03.617895   41236 command_runner.go:130] >     {
	I0722 00:16:03.617903   41236 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0722 00:16:03.617908   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.617923   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0722 00:16:03.617929   41236 command_runner.go:130] >       ],
	I0722 00:16:03.617936   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.617952   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0722 00:16:03.617965   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0722 00:16:03.617970   41236 command_runner.go:130] >       ],
	I0722 00:16:03.617974   41236 command_runner.go:130] >       "size": "87165492",
	I0722 00:16:03.617979   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.617988   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.617999   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618003   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618007   41236 command_runner.go:130] >     },
	I0722 00:16:03.618010   41236 command_runner.go:130] >     {
	I0722 00:16:03.618020   41236 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0722 00:16:03.618029   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618037   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0722 00:16:03.618046   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618053   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618065   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0722 00:16:03.618075   41236 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0722 00:16:03.618082   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618087   41236 command_runner.go:130] >       "size": "87174707",
	I0722 00:16:03.618092   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.618101   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618110   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618119   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618132   41236 command_runner.go:130] >     },
	I0722 00:16:03.618139   41236 command_runner.go:130] >     {
	I0722 00:16:03.618150   41236 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0722 00:16:03.618159   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618169   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0722 00:16:03.618175   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618179   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618189   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0722 00:16:03.618204   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0722 00:16:03.618213   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618220   41236 command_runner.go:130] >       "size": "1363676",
	I0722 00:16:03.618230   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.618240   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618252   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618262   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618270   41236 command_runner.go:130] >     },
	I0722 00:16:03.618278   41236 command_runner.go:130] >     {
	I0722 00:16:03.618285   41236 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0722 00:16:03.618300   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618311   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0722 00:16:03.618320   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618329   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618344   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0722 00:16:03.618366   41236 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0722 00:16:03.618374   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618377   41236 command_runner.go:130] >       "size": "31470524",
	I0722 00:16:03.618386   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.618396   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618406   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618414   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618422   41236 command_runner.go:130] >     },
	I0722 00:16:03.618428   41236 command_runner.go:130] >     {
	I0722 00:16:03.618441   41236 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0722 00:16:03.618456   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618464   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0722 00:16:03.618471   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618481   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618495   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0722 00:16:03.618510   41236 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0722 00:16:03.618526   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618536   41236 command_runner.go:130] >       "size": "61245718",
	I0722 00:16:03.618543   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.618547   41236 command_runner.go:130] >       "username": "nonroot",
	I0722 00:16:03.618555   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618565   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618585   41236 command_runner.go:130] >     },
	I0722 00:16:03.618594   41236 command_runner.go:130] >     {
	I0722 00:16:03.618614   41236 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0722 00:16:03.618623   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618631   41236 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0722 00:16:03.618640   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618649   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618663   41236 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0722 00:16:03.618676   41236 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0722 00:16:03.618690   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618698   41236 command_runner.go:130] >       "size": "150779692",
	I0722 00:16:03.618701   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.618708   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.618719   41236 command_runner.go:130] >       },
	I0722 00:16:03.618729   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618735   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618745   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618753   41236 command_runner.go:130] >     },
	I0722 00:16:03.618759   41236 command_runner.go:130] >     {
	I0722 00:16:03.618771   41236 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0722 00:16:03.618780   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618791   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0722 00:16:03.618799   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618803   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618815   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0722 00:16:03.618830   41236 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0722 00:16:03.618839   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618849   41236 command_runner.go:130] >       "size": "117609954",
	I0722 00:16:03.618857   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.618867   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.618875   41236 command_runner.go:130] >       },
	I0722 00:16:03.618884   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.618892   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.618899   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.618902   41236 command_runner.go:130] >     },
	I0722 00:16:03.618910   41236 command_runner.go:130] >     {
	I0722 00:16:03.618923   41236 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0722 00:16:03.618933   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.618945   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0722 00:16:03.618954   41236 command_runner.go:130] >       ],
	I0722 00:16:03.618963   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.618992   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0722 00:16:03.619005   41236 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0722 00:16:03.619010   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619020   41236 command_runner.go:130] >       "size": "112198984",
	I0722 00:16:03.619038   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.619048   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.619056   41236 command_runner.go:130] >       },
	I0722 00:16:03.619066   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.619075   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.619084   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.619090   41236 command_runner.go:130] >     },
	I0722 00:16:03.619094   41236 command_runner.go:130] >     {
	I0722 00:16:03.619105   41236 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0722 00:16:03.619115   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.619125   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0722 00:16:03.619134   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619143   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.619157   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0722 00:16:03.619173   41236 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0722 00:16:03.619179   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619184   41236 command_runner.go:130] >       "size": "85953945",
	I0722 00:16:03.619193   41236 command_runner.go:130] >       "uid": null,
	I0722 00:16:03.619201   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.619207   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.619216   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.619225   41236 command_runner.go:130] >     },
	I0722 00:16:03.619233   41236 command_runner.go:130] >     {
	I0722 00:16:03.619246   41236 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0722 00:16:03.619256   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.619267   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0722 00:16:03.619273   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619277   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.619291   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0722 00:16:03.619306   41236 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0722 00:16:03.619315   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619324   41236 command_runner.go:130] >       "size": "63051080",
	I0722 00:16:03.619332   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.619341   41236 command_runner.go:130] >         "value": "0"
	I0722 00:16:03.619349   41236 command_runner.go:130] >       },
	I0722 00:16:03.619358   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.619372   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.619379   41236 command_runner.go:130] >       "pinned": false
	I0722 00:16:03.619383   41236 command_runner.go:130] >     },
	I0722 00:16:03.619391   41236 command_runner.go:130] >     {
	I0722 00:16:03.619402   41236 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0722 00:16:03.619412   41236 command_runner.go:130] >       "repoTags": [
	I0722 00:16:03.619419   41236 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0722 00:16:03.619427   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619433   41236 command_runner.go:130] >       "repoDigests": [
	I0722 00:16:03.619445   41236 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0722 00:16:03.619457   41236 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0722 00:16:03.619464   41236 command_runner.go:130] >       ],
	I0722 00:16:03.619470   41236 command_runner.go:130] >       "size": "750414",
	I0722 00:16:03.619478   41236 command_runner.go:130] >       "uid": {
	I0722 00:16:03.619483   41236 command_runner.go:130] >         "value": "65535"
	I0722 00:16:03.619490   41236 command_runner.go:130] >       },
	I0722 00:16:03.619496   41236 command_runner.go:130] >       "username": "",
	I0722 00:16:03.619505   41236 command_runner.go:130] >       "spec": null,
	I0722 00:16:03.619510   41236 command_runner.go:130] >       "pinned": true
	I0722 00:16:03.619518   41236 command_runner.go:130] >     }
	I0722 00:16:03.619522   41236 command_runner.go:130] >   ]
	I0722 00:16:03.619529   41236 command_runner.go:130] > }
	I0722 00:16:03.619685   41236 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:16:03.619697   41236 cache_images.go:84] Images are preloaded, skipping loading
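For reference, a minimal sketch (not minikube's own code) of decoding the "crictl images --output json" payload dumped above. The struct fields mirror the JSON keys visible in the log, including the string-typed "size"; it assumes crictl is on $PATH:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON dump above.
type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // reported as a string, e.g. "750414"
	Pinned      bool     `json:"pinned"`
}

func main() {
	// Same command the log shows minikube running over SSH.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var payload struct {
		Images []image `json:"images"`
	}
	if err := json.Unmarshal(out, &payload); err != nil {
		panic(err)
	}
	for _, img := range payload.Images {
		fmt.Println(img.ID[:12], img.RepoTags, "pinned:", img.Pinned)
	}
}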
	I0722 00:16:03.619704   41236 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.30.3 crio true true} ...
	I0722 00:16:03.619801   41236 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-332426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
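The [Unit]/[Service] text above is the kubelet systemd drop-in minikube generates for this node. Below is a hypothetical Go text/template sketch that renders the same ExecStart line; the field names (Version, Hostname, NodeIP) are illustrative, not minikube's actual template variables:

package main

import (
	"os"
	"text/template"
)

// unit reproduces the drop-in logged above, with the host-specific
// values replaced by template fields.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log line above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.30.3",
		"Hostname": "multinode-332426",
		"NodeIP":   "192.168.39.67",
	})
}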
	I0722 00:16:03.619867   41236 ssh_runner.go:195] Run: crio config
	I0722 00:16:03.654561   41236 command_runner.go:130] ! time="2024-07-22 00:16:03.629007454Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0722 00:16:03.660435   41236 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0722 00:16:03.666731   41236 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0722 00:16:03.666759   41236 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0722 00:16:03.666765   41236 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0722 00:16:03.666769   41236 command_runner.go:130] > #
	I0722 00:16:03.666775   41236 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0722 00:16:03.666781   41236 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0722 00:16:03.666788   41236 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0722 00:16:03.666794   41236 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0722 00:16:03.666802   41236 command_runner.go:130] > # reload'.
	I0722 00:16:03.666811   41236 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0722 00:16:03.666820   41236 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0722 00:16:03.666831   41236 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0722 00:16:03.666839   41236 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0722 00:16:03.666844   41236 command_runner.go:130] > [crio]
	I0722 00:16:03.666853   41236 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0722 00:16:03.666864   41236 command_runner.go:130] > # containers images, in this directory.
	I0722 00:16:03.666871   41236 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0722 00:16:03.666885   41236 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0722 00:16:03.666893   41236 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0722 00:16:03.666903   41236 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0722 00:16:03.666912   41236 command_runner.go:130] > # imagestore = ""
	I0722 00:16:03.666922   41236 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0722 00:16:03.666932   41236 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0722 00:16:03.666941   41236 command_runner.go:130] > storage_driver = "overlay"
	I0722 00:16:03.666949   41236 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0722 00:16:03.666961   41236 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0722 00:16:03.666969   41236 command_runner.go:130] > storage_option = [
	I0722 00:16:03.666974   41236 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0722 00:16:03.666977   41236 command_runner.go:130] > ]
	I0722 00:16:03.666983   41236 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0722 00:16:03.666990   41236 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0722 00:16:03.666994   41236 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0722 00:16:03.667014   41236 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0722 00:16:03.667022   41236 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0722 00:16:03.667026   41236 command_runner.go:130] > # always happen on a node reboot
	I0722 00:16:03.667031   41236 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0722 00:16:03.667045   41236 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0722 00:16:03.667053   41236 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0722 00:16:03.667057   41236 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0722 00:16:03.667065   41236 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0722 00:16:03.667072   41236 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0722 00:16:03.667081   41236 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0722 00:16:03.667087   41236 command_runner.go:130] > # internal_wipe = true
	I0722 00:16:03.667094   41236 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0722 00:16:03.667101   41236 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0722 00:16:03.667105   41236 command_runner.go:130] > # internal_repair = false
	I0722 00:16:03.667113   41236 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0722 00:16:03.667119   41236 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0722 00:16:03.667126   41236 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0722 00:16:03.667131   41236 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0722 00:16:03.667138   41236 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0722 00:16:03.667142   41236 command_runner.go:130] > [crio.api]
	I0722 00:16:03.667149   41236 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0722 00:16:03.667154   41236 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0722 00:16:03.667161   41236 command_runner.go:130] > # IP address on which the stream server will listen.
	I0722 00:16:03.667165   41236 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0722 00:16:03.667172   41236 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0722 00:16:03.667178   41236 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0722 00:16:03.667182   41236 command_runner.go:130] > # stream_port = "0"
	I0722 00:16:03.667189   41236 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0722 00:16:03.667193   41236 command_runner.go:130] > # stream_enable_tls = false
	I0722 00:16:03.667201   41236 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0722 00:16:03.667205   41236 command_runner.go:130] > # stream_idle_timeout = ""
	I0722 00:16:03.667215   41236 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0722 00:16:03.667223   41236 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0722 00:16:03.667227   41236 command_runner.go:130] > # minutes.
	I0722 00:16:03.667233   41236 command_runner.go:130] > # stream_tls_cert = ""
	I0722 00:16:03.667238   41236 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0722 00:16:03.667258   41236 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0722 00:16:03.667265   41236 command_runner.go:130] > # stream_tls_key = ""
	I0722 00:16:03.667270   41236 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0722 00:16:03.667278   41236 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0722 00:16:03.667297   41236 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0722 00:16:03.667304   41236 command_runner.go:130] > # stream_tls_ca = ""
	I0722 00:16:03.667311   41236 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0722 00:16:03.667315   41236 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0722 00:16:03.667322   41236 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0722 00:16:03.667333   41236 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
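A quick arithmetic check on the two gRPC limits just printed (plain math, not CRI-O code): the configured 16777216 bytes is 16 MiB, against the 80 MiB default the comments mention.

package main

import "fmt"

func main() {
	const configured = 16777216          // grpc_max_send/recv_msg_size above
	const crioDefault = 80 * 1024 * 1024 // default per the config comments
	fmt.Println(configured == 16*1024*1024) // true: 16 MiB
	fmt.Println(crioDefault)                // 83886080 bytes (80 MiB)
}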
	I0722 00:16:03.667338   41236 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0722 00:16:03.667346   41236 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0722 00:16:03.667349   41236 command_runner.go:130] > [crio.runtime]
	I0722 00:16:03.667356   41236 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0722 00:16:03.667362   41236 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0722 00:16:03.667368   41236 command_runner.go:130] > # "nofile=1024:2048"
	I0722 00:16:03.667373   41236 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0722 00:16:03.667379   41236 command_runner.go:130] > # default_ulimits = [
	I0722 00:16:03.667382   41236 command_runner.go:130] > # ]
	I0722 00:16:03.667388   41236 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0722 00:16:03.667394   41236 command_runner.go:130] > # no_pivot = false
	I0722 00:16:03.667401   41236 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0722 00:16:03.667410   41236 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0722 00:16:03.667414   41236 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0722 00:16:03.667422   41236 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0722 00:16:03.667426   41236 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0722 00:16:03.667436   41236 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0722 00:16:03.667442   41236 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0722 00:16:03.667446   41236 command_runner.go:130] > # Cgroup setting for conmon
	I0722 00:16:03.667454   41236 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0722 00:16:03.667458   41236 command_runner.go:130] > conmon_cgroup = "pod"
	I0722 00:16:03.667464   41236 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0722 00:16:03.667471   41236 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0722 00:16:03.667482   41236 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0722 00:16:03.667492   41236 command_runner.go:130] > conmon_env = [
	I0722 00:16:03.667500   41236 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0722 00:16:03.667514   41236 command_runner.go:130] > ]
	I0722 00:16:03.667526   41236 command_runner.go:130] > # Additional environment variables to set for all the
	I0722 00:16:03.667538   41236 command_runner.go:130] > # containers. These are overridden if set in the
	I0722 00:16:03.667550   41236 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0722 00:16:03.667559   41236 command_runner.go:130] > # default_env = [
	I0722 00:16:03.667566   41236 command_runner.go:130] > # ]
	I0722 00:16:03.667572   41236 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0722 00:16:03.667583   41236 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0722 00:16:03.667589   41236 command_runner.go:130] > # selinux = false
	I0722 00:16:03.667595   41236 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0722 00:16:03.667602   41236 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0722 00:16:03.667608   41236 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0722 00:16:03.667614   41236 command_runner.go:130] > # seccomp_profile = ""
	I0722 00:16:03.667619   41236 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0722 00:16:03.667626   41236 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0722 00:16:03.667632   41236 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0722 00:16:03.667639   41236 command_runner.go:130] > # which might increase security.
	I0722 00:16:03.667643   41236 command_runner.go:130] > # This option is currently deprecated,
	I0722 00:16:03.667651   41236 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0722 00:16:03.667655   41236 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0722 00:16:03.667664   41236 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0722 00:16:03.667670   41236 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0722 00:16:03.667676   41236 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0722 00:16:03.667683   41236 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0722 00:16:03.667693   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.667700   41236 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0722 00:16:03.667705   41236 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0722 00:16:03.667711   41236 command_runner.go:130] > # the cgroup blockio controller.
	I0722 00:16:03.667715   41236 command_runner.go:130] > # blockio_config_file = ""
	I0722 00:16:03.667723   41236 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0722 00:16:03.667727   41236 command_runner.go:130] > # blockio parameters.
	I0722 00:16:03.667733   41236 command_runner.go:130] > # blockio_reload = false
	I0722 00:16:03.667739   41236 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0722 00:16:03.667745   41236 command_runner.go:130] > # irqbalance daemon.
	I0722 00:16:03.667749   41236 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0722 00:16:03.667757   41236 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I0722 00:16:03.667771   41236 command_runner.go:130] > # restore as the irqbalance config at startup. Set to an empty string to disable this flow entirely.
	I0722 00:16:03.667779   41236 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0722 00:16:03.667785   41236 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0722 00:16:03.667793   41236 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0722 00:16:03.667797   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.667801   41236 command_runner.go:130] > # rdt_config_file = ""
	I0722 00:16:03.667806   41236 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0722 00:16:03.667811   41236 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0722 00:16:03.667839   41236 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0722 00:16:03.667846   41236 command_runner.go:130] > # separate_pull_cgroup = ""
	I0722 00:16:03.667851   41236 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0722 00:16:03.667857   41236 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0722 00:16:03.667860   41236 command_runner.go:130] > # will be added.
	I0722 00:16:03.667865   41236 command_runner.go:130] > # default_capabilities = [
	I0722 00:16:03.667868   41236 command_runner.go:130] > # 	"CHOWN",
	I0722 00:16:03.667872   41236 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0722 00:16:03.667875   41236 command_runner.go:130] > # 	"FSETID",
	I0722 00:16:03.667879   41236 command_runner.go:130] > # 	"FOWNER",
	I0722 00:16:03.667882   41236 command_runner.go:130] > # 	"SETGID",
	I0722 00:16:03.667887   41236 command_runner.go:130] > # 	"SETUID",
	I0722 00:16:03.667891   41236 command_runner.go:130] > # 	"SETPCAP",
	I0722 00:16:03.667898   41236 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0722 00:16:03.667901   41236 command_runner.go:130] > # 	"KILL",
	I0722 00:16:03.667906   41236 command_runner.go:130] > # ]
	I0722 00:16:03.667913   41236 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0722 00:16:03.667920   41236 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0722 00:16:03.667925   41236 command_runner.go:130] > # add_inheritable_capabilities = false
	I0722 00:16:03.667931   41236 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0722 00:16:03.667937   41236 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0722 00:16:03.667942   41236 command_runner.go:130] > default_sysctls = [
	I0722 00:16:03.667946   41236 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0722 00:16:03.667949   41236 command_runner.go:130] > ]
	I0722 00:16:03.667953   41236 command_runner.go:130] > # List of devices on the host that a
	I0722 00:16:03.667962   41236 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0722 00:16:03.667965   41236 command_runner.go:130] > # allowed_devices = [
	I0722 00:16:03.667968   41236 command_runner.go:130] > # 	"/dev/fuse",
	I0722 00:16:03.667976   41236 command_runner.go:130] > # ]
	I0722 00:16:03.667983   41236 command_runner.go:130] > # List of additional devices, specified as
	I0722 00:16:03.667990   41236 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0722 00:16:03.667997   41236 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0722 00:16:03.668004   41236 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0722 00:16:03.668010   41236 command_runner.go:130] > # additional_devices = [
	I0722 00:16:03.668013   41236 command_runner.go:130] > # ]
	I0722 00:16:03.668018   41236 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0722 00:16:03.668024   41236 command_runner.go:130] > # cdi_spec_dirs = [
	I0722 00:16:03.668027   41236 command_runner.go:130] > # 	"/etc/cdi",
	I0722 00:16:03.668031   41236 command_runner.go:130] > # 	"/var/run/cdi",
	I0722 00:16:03.668034   41236 command_runner.go:130] > # ]
	I0722 00:16:03.668039   41236 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0722 00:16:03.668047   41236 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0722 00:16:03.668051   41236 command_runner.go:130] > # Defaults to false.
	I0722 00:16:03.668058   41236 command_runner.go:130] > # device_ownership_from_security_context = false
	I0722 00:16:03.668069   41236 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0722 00:16:03.668077   41236 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0722 00:16:03.668080   41236 command_runner.go:130] > # hooks_dir = [
	I0722 00:16:03.668085   41236 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0722 00:16:03.668091   41236 command_runner.go:130] > # ]
	I0722 00:16:03.668096   41236 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0722 00:16:03.668102   41236 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0722 00:16:03.668108   41236 command_runner.go:130] > # its default mounts from the following two files:
	I0722 00:16:03.668111   41236 command_runner.go:130] > #
	I0722 00:16:03.668118   41236 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0722 00:16:03.668126   41236 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0722 00:16:03.668131   41236 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0722 00:16:03.668136   41236 command_runner.go:130] > #
	I0722 00:16:03.668141   41236 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0722 00:16:03.668150   41236 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0722 00:16:03.668156   41236 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0722 00:16:03.668163   41236 command_runner.go:130] > #      only add mounts it finds in this file.
	I0722 00:16:03.668166   41236 command_runner.go:130] > #
	I0722 00:16:03.668170   41236 command_runner.go:130] > # default_mounts_file = ""
	I0722 00:16:03.668175   41236 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0722 00:16:03.668185   41236 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0722 00:16:03.668191   41236 command_runner.go:130] > pids_limit = 1024
	I0722 00:16:03.668197   41236 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0722 00:16:03.668205   41236 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0722 00:16:03.668211   41236 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0722 00:16:03.668220   41236 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0722 00:16:03.668226   41236 command_runner.go:130] > # log_size_max = -1
	I0722 00:16:03.668233   41236 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0722 00:16:03.668262   41236 command_runner.go:130] > # log_to_journald = false
	I0722 00:16:03.668275   41236 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0722 00:16:03.668280   41236 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0722 00:16:03.668286   41236 command_runner.go:130] > # Path to directory for container attach sockets.
	I0722 00:16:03.668291   41236 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0722 00:16:03.668298   41236 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0722 00:16:03.668302   41236 command_runner.go:130] > # bind_mount_prefix = ""
	I0722 00:16:03.668309   41236 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0722 00:16:03.668314   41236 command_runner.go:130] > # read_only = false
	I0722 00:16:03.668320   41236 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0722 00:16:03.668334   41236 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0722 00:16:03.668340   41236 command_runner.go:130] > # live configuration reload.
	I0722 00:16:03.668344   41236 command_runner.go:130] > # log_level = "info"
	I0722 00:16:03.668350   41236 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0722 00:16:03.668355   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.668361   41236 command_runner.go:130] > # log_filter = ""
	I0722 00:16:03.668367   41236 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0722 00:16:03.668376   41236 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0722 00:16:03.668379   41236 command_runner.go:130] > # separated by comma.
	I0722 00:16:03.668387   41236 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 00:16:03.668392   41236 command_runner.go:130] > # uid_mappings = ""
	I0722 00:16:03.668398   41236 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0722 00:16:03.668405   41236 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0722 00:16:03.668409   41236 command_runner.go:130] > # separated by comma.
	I0722 00:16:03.668418   41236 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 00:16:03.668424   41236 command_runner.go:130] > # gid_mappings = ""
	I0722 00:16:03.668430   41236 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0722 00:16:03.668438   41236 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0722 00:16:03.668448   41236 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0722 00:16:03.668457   41236 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 00:16:03.668462   41236 command_runner.go:130] > # minimum_mappable_uid = -1
	I0722 00:16:03.668467   41236 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0722 00:16:03.668473   41236 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0722 00:16:03.668481   41236 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0722 00:16:03.668494   41236 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0722 00:16:03.668506   41236 command_runner.go:130] > # minimum_mappable_gid = -1
	I0722 00:16:03.668518   41236 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0722 00:16:03.668529   41236 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0722 00:16:03.668540   41236 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0722 00:16:03.668548   41236 command_runner.go:130] > # ctr_stop_timeout = 30
	I0722 00:16:03.668560   41236 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0722 00:16:03.668569   41236 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0722 00:16:03.668576   41236 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0722 00:16:03.668580   41236 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0722 00:16:03.668584   41236 command_runner.go:130] > drop_infra_ctr = false
	I0722 00:16:03.668590   41236 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0722 00:16:03.668595   41236 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0722 00:16:03.668601   41236 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0722 00:16:03.668605   41236 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0722 00:16:03.668611   41236 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0722 00:16:03.668616   41236 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0722 00:16:03.668621   41236 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0722 00:16:03.668625   41236 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0722 00:16:03.668629   41236 command_runner.go:130] > # shared_cpuset = ""
	I0722 00:16:03.668634   41236 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0722 00:16:03.668638   41236 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0722 00:16:03.668642   41236 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0722 00:16:03.668649   41236 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0722 00:16:03.668653   41236 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0722 00:16:03.668658   41236 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0722 00:16:03.668665   41236 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0722 00:16:03.668669   41236 command_runner.go:130] > # enable_criu_support = false
	I0722 00:16:03.668674   41236 command_runner.go:130] > # Enable/disable the generation of the container,
	I0722 00:16:03.668679   41236 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0722 00:16:03.668692   41236 command_runner.go:130] > # enable_pod_events = false
	I0722 00:16:03.668701   41236 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0722 00:16:03.668717   41236 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0722 00:16:03.668721   41236 command_runner.go:130] > # default_runtime = "runc"
	I0722 00:16:03.668727   41236 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0722 00:16:03.668734   41236 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of the path being created as a directory).
	I0722 00:16:03.668745   41236 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0722 00:16:03.668752   41236 command_runner.go:130] > # creation as a file is not desired either.
	I0722 00:16:03.668762   41236 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0722 00:16:03.668767   41236 command_runner.go:130] > # the hostname is being managed dynamically.
	I0722 00:16:03.668773   41236 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0722 00:16:03.668776   41236 command_runner.go:130] > # ]
	I0722 00:16:03.668781   41236 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0722 00:16:03.668789   41236 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0722 00:16:03.668797   41236 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0722 00:16:03.668803   41236 command_runner.go:130] > # Each entry in the table should follow the format:
	I0722 00:16:03.668807   41236 command_runner.go:130] > #
	I0722 00:16:03.668811   41236 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0722 00:16:03.668818   41236 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0722 00:16:03.668861   41236 command_runner.go:130] > # runtime_type = "oci"
	I0722 00:16:03.668867   41236 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0722 00:16:03.668872   41236 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0722 00:16:03.668876   41236 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0722 00:16:03.668881   41236 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0722 00:16:03.668887   41236 command_runner.go:130] > # monitor_env = []
	I0722 00:16:03.668891   41236 command_runner.go:130] > # privileged_without_host_devices = false
	I0722 00:16:03.668895   41236 command_runner.go:130] > # allowed_annotations = []
	I0722 00:16:03.668899   41236 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0722 00:16:03.668905   41236 command_runner.go:130] > # Where:
	I0722 00:16:03.668910   41236 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0722 00:16:03.668916   41236 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0722 00:16:03.668924   41236 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0722 00:16:03.668930   41236 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0722 00:16:03.668936   41236 command_runner.go:130] > #   in $PATH.
	I0722 00:16:03.668941   41236 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0722 00:16:03.668950   41236 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0722 00:16:03.668958   41236 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0722 00:16:03.668961   41236 command_runner.go:130] > #   state.
	I0722 00:16:03.668967   41236 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0722 00:16:03.668975   41236 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0722 00:16:03.668980   41236 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0722 00:16:03.668985   41236 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0722 00:16:03.668992   41236 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0722 00:16:03.668998   41236 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0722 00:16:03.669006   41236 command_runner.go:130] > #   The currently recognized values are:
	I0722 00:16:03.669012   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0722 00:16:03.669020   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0722 00:16:03.669025   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0722 00:16:03.669033   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0722 00:16:03.669040   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0722 00:16:03.669047   41236 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0722 00:16:03.669053   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0722 00:16:03.669061   41236 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0722 00:16:03.669067   41236 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0722 00:16:03.669072   41236 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0722 00:16:03.669077   41236 command_runner.go:130] > #   deprecated option "conmon".
	I0722 00:16:03.669083   41236 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0722 00:16:03.669090   41236 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0722 00:16:03.669096   41236 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0722 00:16:03.669101   41236 command_runner.go:130] > #   should be moved to the container's cgroup
	I0722 00:16:03.669107   41236 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0722 00:16:03.669112   41236 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0722 00:16:03.669118   41236 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0722 00:16:03.669124   41236 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0722 00:16:03.669127   41236 command_runner.go:130] > #
	I0722 00:16:03.669131   41236 command_runner.go:130] > # Using the seccomp notifier feature:
	I0722 00:16:03.669136   41236 command_runner.go:130] > #
	I0722 00:16:03.669142   41236 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0722 00:16:03.669150   41236 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0722 00:16:03.669153   41236 command_runner.go:130] > #
	I0722 00:16:03.669158   41236 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0722 00:16:03.669171   41236 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0722 00:16:03.669175   41236 command_runner.go:130] > #
	I0722 00:16:03.669180   41236 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0722 00:16:03.669186   41236 command_runner.go:130] > # feature.
	I0722 00:16:03.669189   41236 command_runner.go:130] > #
	I0722 00:16:03.669194   41236 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0722 00:16:03.669200   41236 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0722 00:16:03.669205   41236 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0722 00:16:03.669213   41236 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0722 00:16:03.669220   41236 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0722 00:16:03.669223   41236 command_runner.go:130] > #
	I0722 00:16:03.669228   41236 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0722 00:16:03.669236   41236 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0722 00:16:03.669239   41236 command_runner.go:130] > #
	I0722 00:16:03.669244   41236 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0722 00:16:03.669249   41236 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0722 00:16:03.669253   41236 command_runner.go:130] > #
	I0722 00:16:03.669258   41236 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0722 00:16:03.669266   41236 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0722 00:16:03.669269   41236 command_runner.go:130] > # limitation.
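To make the notifier flow above concrete, here is a minimal sketch of an opted-in pod, assuming a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction"; the pod name, container name, and image are hypothetical placeholders.

# Sketch: opt a pod into the seccomp notifier described above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-debug                                  # hypothetical name
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"  # terminate on blocked syscall
spec:
  restartPolicy: Never                                 # required, per the comments above
  containers:
  - name: app                                          # hypothetical container
    image: busybox                                     # placeholder image
    command: ["sleep", "3600"]
EOF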
	I0722 00:16:03.669274   41236 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0722 00:16:03.669281   41236 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0722 00:16:03.669284   41236 command_runner.go:130] > runtime_type = "oci"
	I0722 00:16:03.669288   41236 command_runner.go:130] > runtime_root = "/run/runc"
	I0722 00:16:03.669296   41236 command_runner.go:130] > runtime_config_path = ""
	I0722 00:16:03.669303   41236 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0722 00:16:03.669307   41236 command_runner.go:130] > monitor_cgroup = "pod"
	I0722 00:16:03.669311   41236 command_runner.go:130] > monitor_exec_cgroup = ""
	I0722 00:16:03.669315   41236 command_runner.go:130] > monitor_env = [
	I0722 00:16:03.669320   41236 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0722 00:16:03.669329   41236 command_runner.go:130] > ]
	I0722 00:16:03.669333   41236 command_runner.go:130] > privileged_without_host_devices = false
	I0722 00:16:03.669340   41236 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0722 00:16:03.669347   41236 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0722 00:16:03.669352   41236 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0722 00:16:03.669361   41236 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0722 00:16:03.669372   41236 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0722 00:16:03.669380   41236 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0722 00:16:03.669388   41236 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0722 00:16:03.669397   41236 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0722 00:16:03.669403   41236 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0722 00:16:03.669409   41236 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0722 00:16:03.669412   41236 command_runner.go:130] > # Example:
	I0722 00:16:03.669416   41236 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0722 00:16:03.669420   41236 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0722 00:16:03.669425   41236 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0722 00:16:03.669431   41236 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0722 00:16:03.669434   41236 command_runner.go:130] > # cpuset = "0-1"
	I0722 00:16:03.669438   41236 command_runner.go:130] > # cpushares = 0
	I0722 00:16:03.669441   41236 command_runner.go:130] > # Where:
	I0722 00:16:03.669445   41236 command_runner.go:130] > # The workload name is workload-type.
	I0722 00:16:03.669451   41236 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0722 00:16:03.669456   41236 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0722 00:16:03.669461   41236 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0722 00:16:03.669468   41236 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0722 00:16:03.669473   41236 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
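Putting the example above together, a pod opting into this workload might look like the following sketch; the pod/container names and the cpushares value are hypothetical, and the annotation forms mirror the example lines above.

# Sketch: activation annotation plus a per-container override, as in the example above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tuned-pod                                      # hypothetical name
  annotations:
    io.crio/workload: ""                               # activation: key only, value ignored
    io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override (hypothetical value)
spec:
  containers:
  - name: app                                          # hypothetical container
    image: busybox                                     # placeholder image
    command: ["sleep", "3600"]
EOF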
	I0722 00:16:03.669477   41236 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0722 00:16:03.669483   41236 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0722 00:16:03.669486   41236 command_runner.go:130] > # Default value is set to true
	I0722 00:16:03.669490   41236 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0722 00:16:03.669495   41236 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0722 00:16:03.669499   41236 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0722 00:16:03.669503   41236 command_runner.go:130] > # Default value is set to 'false'
	I0722 00:16:03.669507   41236 command_runner.go:130] > # disable_hostport_mapping = false
	I0722 00:16:03.669512   41236 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0722 00:16:03.669514   41236 command_runner.go:130] > #
	I0722 00:16:03.669520   41236 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0722 00:16:03.669525   41236 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0722 00:16:03.669530   41236 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0722 00:16:03.669536   41236 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0722 00:16:03.669541   41236 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0722 00:16:03.669544   41236 command_runner.go:130] > [crio.image]
	I0722 00:16:03.669553   41236 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0722 00:16:03.669557   41236 command_runner.go:130] > # default_transport = "docker://"
	I0722 00:16:03.669563   41236 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0722 00:16:03.669568   41236 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0722 00:16:03.669572   41236 command_runner.go:130] > # global_auth_file = ""
	I0722 00:16:03.669579   41236 command_runner.go:130] > # The image used to instantiate infra containers.
	I0722 00:16:03.669587   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.669591   41236 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0722 00:16:03.669597   41236 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0722 00:16:03.669604   41236 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0722 00:16:03.669608   41236 command_runner.go:130] > # This option supports live configuration reload.
	I0722 00:16:03.669616   41236 command_runner.go:130] > # pause_image_auth_file = ""
	I0722 00:16:03.669622   41236 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0722 00:16:03.669628   41236 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0722 00:16:03.669634   41236 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0722 00:16:03.669640   41236 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0722 00:16:03.669648   41236 command_runner.go:130] > # pause_command = "/pause"
	I0722 00:16:03.669656   41236 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0722 00:16:03.669662   41236 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0722 00:16:03.669667   41236 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0722 00:16:03.669675   41236 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0722 00:16:03.669682   41236 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0722 00:16:03.669690   41236 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0722 00:16:03.669693   41236 command_runner.go:130] > # pinned_images = [
	I0722 00:16:03.669699   41236 command_runner.go:130] > # ]
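As an illustration of the three pattern styles just described, a drop-in such as the following sketch could pin images; the drop-in file name and the image list are hypothetical examples, assuming the standard /etc/crio/crio.conf.d drop-in directory.

# Sketch: pinning images via a CRI-O drop-in; one example of each pattern style.
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf
[crio.image]
pinned_images = [
  "registry.k8s.io/pause:3.9",   # exact: must match the entire name
  "registry.k8s.io/kube-*",      # glob: wildcard only at the end
  "*coredns*",                   # keyword: wildcards on both ends
]
EOF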
	I0722 00:16:03.669705   41236 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0722 00:16:03.669712   41236 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0722 00:16:03.669718   41236 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0722 00:16:03.669724   41236 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0722 00:16:03.669729   41236 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0722 00:16:03.669733   41236 command_runner.go:130] > # signature_policy = ""
	I0722 00:16:03.669738   41236 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0722 00:16:03.669747   41236 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0722 00:16:03.669753   41236 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0722 00:16:03.669761   41236 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0722 00:16:03.669766   41236 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0722 00:16:03.669776   41236 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0722 00:16:03.669782   41236 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0722 00:16:03.669790   41236 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0722 00:16:03.669794   41236 command_runner.go:130] > # changing them here.
	I0722 00:16:03.669798   41236 command_runner.go:130] > # insecure_registries = [
	I0722 00:16:03.669801   41236 command_runner.go:130] > # ]
	I0722 00:16:03.669807   41236 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0722 00:16:03.669812   41236 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0722 00:16:03.669816   41236 command_runner.go:130] > # image_volumes = "mkdir"
	I0722 00:16:03.669823   41236 command_runner.go:130] > # Temporary directory to use for storing big files
	I0722 00:16:03.669827   41236 command_runner.go:130] > # big_files_temporary_dir = ""
	I0722 00:16:03.669839   41236 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0722 00:16:03.669843   41236 command_runner.go:130] > # CNI plugins.
	I0722 00:16:03.669847   41236 command_runner.go:130] > [crio.network]
	I0722 00:16:03.669852   41236 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0722 00:16:03.669858   41236 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0722 00:16:03.669862   41236 command_runner.go:130] > # cni_default_network = ""
	I0722 00:16:03.669869   41236 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0722 00:16:03.669873   41236 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0722 00:16:03.669878   41236 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0722 00:16:03.669884   41236 command_runner.go:130] > # plugin_dirs = [
	I0722 00:16:03.669887   41236 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0722 00:16:03.669890   41236 command_runner.go:130] > # ]
	I0722 00:16:03.669895   41236 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0722 00:16:03.669900   41236 command_runner.go:130] > [crio.metrics]
	I0722 00:16:03.669904   41236 command_runner.go:130] > # Globally enable or disable metrics support.
	I0722 00:16:03.669908   41236 command_runner.go:130] > enable_metrics = true
	I0722 00:16:03.669912   41236 command_runner.go:130] > # Specify enabled metrics collectors.
	I0722 00:16:03.669916   41236 command_runner.go:130] > # Per default all metrics are enabled.
	I0722 00:16:03.669922   41236 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0722 00:16:03.669930   41236 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0722 00:16:03.669935   41236 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0722 00:16:03.669941   41236 command_runner.go:130] > # metrics_collectors = [
	I0722 00:16:03.669944   41236 command_runner.go:130] > # 	"operations",
	I0722 00:16:03.669949   41236 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0722 00:16:03.669953   41236 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0722 00:16:03.669963   41236 command_runner.go:130] > # 	"operations_errors",
	I0722 00:16:03.669967   41236 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0722 00:16:03.669974   41236 command_runner.go:130] > # 	"image_pulls_by_name",
	I0722 00:16:03.669977   41236 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0722 00:16:03.669981   41236 command_runner.go:130] > # 	"image_pulls_failures",
	I0722 00:16:03.669988   41236 command_runner.go:130] > # 	"image_pulls_successes",
	I0722 00:16:03.669991   41236 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0722 00:16:03.669995   41236 command_runner.go:130] > # 	"image_layer_reuse",
	I0722 00:16:03.669999   41236 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0722 00:16:03.670003   41236 command_runner.go:130] > # 	"containers_oom_total",
	I0722 00:16:03.670006   41236 command_runner.go:130] > # 	"containers_oom",
	I0722 00:16:03.670010   41236 command_runner.go:130] > # 	"processes_defunct",
	I0722 00:16:03.670013   41236 command_runner.go:130] > # 	"operations_total",
	I0722 00:16:03.670017   41236 command_runner.go:130] > # 	"operations_latency_seconds",
	I0722 00:16:03.670021   41236 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0722 00:16:03.670025   41236 command_runner.go:130] > # 	"operations_errors_total",
	I0722 00:16:03.670029   41236 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0722 00:16:03.670033   41236 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0722 00:16:03.670040   41236 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0722 00:16:03.670044   41236 command_runner.go:130] > # 	"image_pulls_success_total",
	I0722 00:16:03.670050   41236 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0722 00:16:03.670056   41236 command_runner.go:130] > # 	"containers_oom_count_total",
	I0722 00:16:03.670061   41236 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0722 00:16:03.670065   41236 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0722 00:16:03.670071   41236 command_runner.go:130] > # ]
	I0722 00:16:03.670076   41236 command_runner.go:130] > # The port on which the metrics server will listen.
	I0722 00:16:03.670079   41236 command_runner.go:130] > # metrics_port = 9090
	I0722 00:16:03.670084   41236 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0722 00:16:03.670089   41236 command_runner.go:130] > # metrics_socket = ""
	I0722 00:16:03.670094   41236 command_runner.go:130] > # The certificate for the secure metrics server.
	I0722 00:16:03.670100   41236 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0722 00:16:03.670108   41236 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0722 00:16:03.670112   41236 command_runner.go:130] > # certificate on any modification event.
	I0722 00:16:03.670115   41236 command_runner.go:130] > # metrics_cert = ""
	I0722 00:16:03.670120   41236 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0722 00:16:03.670127   41236 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0722 00:16:03.670135   41236 command_runner.go:130] > # metrics_key = ""
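With enable_metrics = true as set above, the collectors can be scraped over HTTP on metrics_port. The sketch below assumes the default port 9090, the conventional /metrics path, and that no metrics_cert/metrics_key is configured (plain HTTP).

# Sketch: scrape CRI-O's Prometheus endpoint and pick out one collector family.
curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations'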
	I0722 00:16:03.670142   41236 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0722 00:16:03.670146   41236 command_runner.go:130] > [crio.tracing]
	I0722 00:16:03.670151   41236 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0722 00:16:03.670155   41236 command_runner.go:130] > # enable_tracing = false
	I0722 00:16:03.670159   41236 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0722 00:16:03.670164   41236 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0722 00:16:03.670170   41236 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0722 00:16:03.670176   41236 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0722 00:16:03.670180   41236 command_runner.go:130] > # CRI-O NRI configuration.
	I0722 00:16:03.670183   41236 command_runner.go:130] > [crio.nri]
	I0722 00:16:03.670188   41236 command_runner.go:130] > # Globally enable or disable NRI.
	I0722 00:16:03.670191   41236 command_runner.go:130] > # enable_nri = false
	I0722 00:16:03.670196   41236 command_runner.go:130] > # NRI socket to listen on.
	I0722 00:16:03.670200   41236 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0722 00:16:03.670204   41236 command_runner.go:130] > # NRI plugin directory to use.
	I0722 00:16:03.670208   41236 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0722 00:16:03.670213   41236 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0722 00:16:03.670219   41236 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0722 00:16:03.670224   41236 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0722 00:16:03.670230   41236 command_runner.go:130] > # nri_disable_connections = false
	I0722 00:16:03.670235   41236 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0722 00:16:03.670240   41236 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0722 00:16:03.670246   41236 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0722 00:16:03.670250   41236 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0722 00:16:03.670258   41236 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0722 00:16:03.670262   41236 command_runner.go:130] > [crio.stats]
	I0722 00:16:03.670271   41236 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0722 00:16:03.670279   41236 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0722 00:16:03.670283   41236 command_runner.go:130] > # stats_collection_period = 0
	I0722 00:16:03.670423   41236 cni.go:84] Creating CNI manager for ""
	I0722 00:16:03.670434   41236 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 00:16:03.670445   41236 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:16:03.670467   41236 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-332426 NodeName:multinode-332426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:16:03.670585   41236 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-332426"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
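The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged aside (this is not something minikube itself runs here), such a file can be sanity-checked without changing the node via kubeadm's dry-run mode:

# Sketch: validate the generated config without applying anything.
sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run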
	
	I0722 00:16:03.670669   41236 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:16:03.680252   41236 command_runner.go:130] > kubeadm
	I0722 00:16:03.680269   41236 command_runner.go:130] > kubectl
	I0722 00:16:03.680275   41236 command_runner.go:130] > kubelet
	I0722 00:16:03.680317   41236 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:16:03.680368   41236 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:16:03.689319   41236 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0722 00:16:03.705214   41236 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:16:03.720584   41236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0722 00:16:03.735802   41236 ssh_runner.go:195] Run: grep 192.168.39.67	control-plane.minikube.internal$ /etc/hosts
	I0722 00:16:03.739155   41236 command_runner.go:130] > 192.168.39.67	control-plane.minikube.internal
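The grep above checks whether /etc/hosts already maps the control-plane name; since it matched, nothing needs appending. The usual idempotent pattern such a check supports looks roughly like this sketch (the follow-up step is assumed, not shown in this log):

# Sketch: append the hosts entry only when the control-plane name is absent.
grep -q 'control-plane.minikube.internal' /etc/hosts \
  || printf '192.168.39.67\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts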
	I0722 00:16:03.739295   41236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:16:03.870043   41236 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:16:03.884645   41236 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426 for IP: 192.168.39.67
	I0722 00:16:03.884664   41236 certs.go:194] generating shared ca certs ...
	I0722 00:16:03.884683   41236 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:16:03.884841   41236 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:16:03.884892   41236 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:16:03.884906   41236 certs.go:256] generating profile certs ...
	I0722 00:16:03.884999   41236 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/client.key
	I0722 00:16:03.885075   41236 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.key.b93420c1
	I0722 00:16:03.885131   41236 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.key
	I0722 00:16:03.885144   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 00:16:03.885169   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 00:16:03.885188   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 00:16:03.885203   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 00:16:03.885226   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 00:16:03.885253   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 00:16:03.885272   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 00:16:03.885289   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 00:16:03.885354   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:16:03.885398   41236 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:16:03.885412   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:16:03.885451   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:16:03.885491   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:16:03.885521   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:16:03.885581   41236 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:16:03.885635   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:03.885654   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem -> /usr/share/ca-certificates/12263.pem
	I0722 00:16:03.885672   41236 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> /usr/share/ca-certificates/122632.pem
	I0722 00:16:03.886960   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:16:03.910693   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:16:03.932387   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:16:03.953903   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:16:03.975334   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:16:03.997368   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 00:16:04.018872   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:16:04.039827   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/multinode-332426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:16:04.061057   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:16:04.082572   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:16:04.103991   41236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:16:04.125131   41236 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:16:04.140030   41236 ssh_runner.go:195] Run: openssl version
	I0722 00:16:04.145229   41236 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0722 00:16:04.145361   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:16:04.154826   41236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:16:04.158953   41236 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:16:04.158988   41236 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:16:04.159038   41236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:16:04.164187   41236 command_runner.go:130] > 51391683
	I0722 00:16:04.164254   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:16:04.172546   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:16:04.182156   41236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:16:04.186005   41236 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:16:04.186097   41236 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:16:04.186147   41236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:16:04.191191   41236 command_runner.go:130] > 3ec20f2e
	I0722 00:16:04.191257   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:16:04.199765   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:16:04.209604   41236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:04.213560   41236 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:04.213588   41236 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:04.213627   41236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:16:04.218764   41236 command_runner.go:130] > b5213941
	I0722 00:16:04.218927   41236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
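The hash/symlink dance above follows OpenSSL's CA lookup convention: "openssl x509 -hash" prints the subject-name hash (b5213941 here), and OpenSSL resolves CAs in /etc/ssl/certs by the file name <hash>.0. A condensed sketch of the same pattern, with a hypothetical leaf certificate:

# Sketch: install a CA where OpenSSL's default lookup will find it.
CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints b5213941, as in the log
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves CAs as <hash>.0
openssl verify -CApath /etc/ssl/certs leaf.pem   # hypothetical leaf certificate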
	I0722 00:16:04.227452   41236 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:16:04.231500   41236 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:16:04.231526   41236 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0722 00:16:04.231535   41236 command_runner.go:130] > Device: 253,1	Inode: 7339051     Links: 1
	I0722 00:16:04.231545   41236 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 00:16:04.231554   41236 command_runner.go:130] > Access: 2024-07-22 00:09:20.218897765 +0000
	I0722 00:16:04.231562   41236 command_runner.go:130] > Modify: 2024-07-22 00:09:20.218897765 +0000
	I0722 00:16:04.231569   41236 command_runner.go:130] > Change: 2024-07-22 00:09:20.218897765 +0000
	I0722 00:16:04.231576   41236 command_runner.go:130] >  Birth: 2024-07-22 00:09:20.218897765 +0000
	I0722 00:16:04.231634   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:16:04.236788   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.236924   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:16:04.242056   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.242100   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:16:04.247211   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.247252   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:16:04.252139   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.252312   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:16:04.257176   41236 command_runner.go:130] > Certificate will not expire
	I0722 00:16:04.257318   41236 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 00:16:04.262426   41236 command_runner.go:130] > Certificate will not expire
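Each check above uses -checkend 86400, which asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl prints "Certificate will not expire" and exits 0 when it will. The loop below is a plausible sketch of acting on a failing check, with paths taken from the log:

# Sketch: flag any certificate that would expire within the next 24 hours.
for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         /var/lib/minikube/certs/etcd/server.crt; do
  openssl x509 -noout -in "$c" -checkend 86400 || echo "renew soon: $c"
done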
	I0722 00:16:04.262489   41236 kubeadm.go:392] StartCluster: {Name:multinode-332426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-332426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:16:04.262641   41236 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:16:04.262693   41236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:16:04.296982   41236 command_runner.go:130] > bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846
	I0722 00:16:04.297014   41236 command_runner.go:130] > 5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de
	I0722 00:16:04.297023   41236 command_runner.go:130] > be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7
	I0722 00:16:04.297035   41236 command_runner.go:130] > 84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421
	I0722 00:16:04.297043   41236 command_runner.go:130] > cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7
	I0722 00:16:04.297052   41236 command_runner.go:130] > 6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24
	I0722 00:16:04.297060   41236 command_runner.go:130] > d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b
	I0722 00:16:04.297071   41236 command_runner.go:130] > 0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea
	I0722 00:16:04.297100   41236 cri.go:89] found id: "bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846"
	I0722 00:16:04.297110   41236 cri.go:89] found id: "5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de"
	I0722 00:16:04.297115   41236 cri.go:89] found id: "be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7"
	I0722 00:16:04.297119   41236 cri.go:89] found id: "84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421"
	I0722 00:16:04.297123   41236 cri.go:89] found id: "cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7"
	I0722 00:16:04.297128   41236 cri.go:89] found id: "6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24"
	I0722 00:16:04.297132   41236 cri.go:89] found id: "d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b"
	I0722 00:16:04.297136   41236 cri.go:89] found id: "0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea"
	I0722 00:16:04.297140   41236 cri.go:89] found id: ""
	I0722 00:16:04.297188   41236 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.204210389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721607615204183343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4e9087d-062a-43f8-873e-52aa509d4d12 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.204626592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30f91a67-17fc-49ad-91eb-5032aa489bae name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.204680290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30f91a67-17fc-49ad-91eb-5032aa489bae name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.205012174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74e365f9a940e3cd15b0f86dd171eadaeca29cf74b7d57829e01a412f2b63b29,PodSandboxId:891d28ebec68e635cb101ce0d11b46b33643979cb2b2cf3082273157a29eb80d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721607404076720989,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54bcf11731a98d85b4d61b01c20e0db73cdd9acf3e988a095ec21e7b7d3f4501,PodSandboxId:deb2045cfa5a5718f27e74cce48a7747be515c460417aa4f3b58b26edb0d98a3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721607370550629349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376b5ca7df77acb823a9edf98a98de76d8921e29cb3f3d83b6ca8e80dd9adad,PodSandboxId:a5d86fab1d641a2ff38a9e9de8942b08b1fbde9f32161e93b1f0136db0c8a5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721607370420683446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edc446bb04896ca905fae0a4b6da09ae02715062a8fe8136c49b2053071a5902,PodSandboxId:3b7dccc46d65ab5a45bd1aafdc3b5d260a5054d941f0c1e0a878532c0b12bbf6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721607370347750826,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},An
notations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45562da2aee19b5644bbde258e52e1a1003d8f48a83daa2c330a0f91ef2bd3cd,PodSandboxId:26cfa9830a4731bd59fb2ecec688c192ae76c501e5ad580957bff50d387e079c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721607370351748330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82aa1b89b93801811d20cdf64f0f91e160736eca23a9665b31342ed3d3505b2,PodSandboxId:a1eaa28dfbc32e7eeaec24d6c82b901a54c3d2173b8a21a4bb602f37860118e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721607366550872229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotations:map[string]string{io.kubernetes.container.hash: bc690f80,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf2fa4343e40390af084045f5b500056976d69c67d290fd03e7bd83c2a4dc55,PodSandboxId:abf2787e3e3ddb6f9d7c65fc47aff9d18cc012faf441f614ad8bafb50b5b0271,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721607366528535782,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3a51dbfdeccbde8bebb1c9443df0cdd4d847fcc049d2fa977b25371d4672b9,PodSandboxId:76bcada916ab25be3ba089a808dc1b49b1e8e949f0cc4ad3519db3e287bec768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721607366502021903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca1b020e36a140ceba7ae156489e3a9eeb54c7816a7bb4279159edd347584f8,PodSandboxId:611b738c9a42901bef5746c6a5062e10ed2b80a0d365e8bc7699279af871e649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721607366471377575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865d5fd7e8026011ea33af66b5cd14f914a97dc721a10b278825bb9ff83a10dd,PodSandboxId:0e8b20f046d00ddc310638b441bf2452abd9e13498b11dd8d02462dd7169cee5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721607050406276139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846,PodSandboxId:5b314dc7aee93075a178107b9a08aa8c52545f2cd99901092055b111e993231a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721606999160809325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de,PodSandboxId:618c48779675e1886825de4aa794ab0bb196417b6cf21dda6740c765485b91c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721606999158508559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},Annotations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7,PodSandboxId:841fb5d9879ce282583f3b4a60d0beda36ef35abf98c9e6e351dd77d63c3734f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721606987583988092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421,PodSandboxId:7cbccdfd31f6cf028a08626712316b40dd780e4bea6427e8a57d69db8564e2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721606983692911141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.kubernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7,PodSandboxId:bc26157c8e97bb849699d9f16d4060385f543523ac0a258140e1b55245d36ae4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721606963925009138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24,PodSandboxId:6023bbb4eb5ac5ff4ede3da90c5b263663d0953a214bb588e4e049094c057589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721606963887376353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotation
s:map[string]string{io.kubernetes.container.hash: bc690f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b,PodSandboxId:d5ba48a08b5b1651507acc0fcbb246aed3240ef94155cb8178fb2fd45361d388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721606963878152937,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea,PodSandboxId:8943995b24cf19442343a44ce5837c11a58c9ea0c76fa95b10390c1a16ed3c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721606963810452872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30f91a67-17fc-49ad-91eb-5032aa489bae name=/runtime.v1.RuntimeService/ListContainers
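The ListContainers dumps above are CRI gRPC traffic recorded by crio's otel-collector interceptor while the node is polled. As a rough illustration only (not part of the minikube test suite), a minimal Go client can issue the same call; the CRI-O socket path and the k8s.io/cri-api client package are assumptions inferred from the RuntimeName and RuntimeApiVersion reported in the log:

    // Minimal sketch, assuming CRI-O's default socket path: list every
    // container (running and exited) over the CRI API, mirroring the
    // /runtime.v1.RuntimeService/ListContainers calls in the log.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // The "unix://" target scheme lets grpc-go dial the local CRI socket.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial CRI socket: %v", err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter takes the "No filters were applied, returning full
        // container list" path seen in the crio debug log.
        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatalf("ListContainers: %v", err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
        }
    }

From the command line, crictl ps -a drives the same RPC.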
	Jul 22 00:20:15 multinode-332426 crio[2855]: [polling cycle at 00:20:15.244-00:20:15.247 elided: Version, ImageFsInfo, and ListContainers exchanges (ids 1662ad50…, c0372fbd…, c992cb4e…) whose ListContainers payload is byte-identical to the response above]
	Jul 22 00:20:15 multinode-332426 crio[2855]: [polling cycle at 00:20:15.287-00:20:15.294 elided: Version, ImageFsInfo, and ListContainers exchanges (ids f69a8be2…, 317c60e9…, 469a62cc…) repeating the same byte-identical container list]
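The Version and ImageFsInfo requests that open each polling cycle can be reproduced the same way. A sketch under the same assumptions, reusing the connection and imports from the previous snippet (note that ImageFsInfo is served by the ImageService rather than the RuntimeService):

    // Sketch: the two bookkeeping RPCs logged before each ListContainers call.
    func dumpRuntimeInfo(ctx context.Context, conn *grpc.ClientConn) error {
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            return fmt.Errorf("Version: %w", err)
        }
        // The log above shows cri-o 1.29.1 speaking CRI API v1.
        fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

        img := runtimeapi.NewImageServiceClient(conn)
        fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
        if err != nil {
            return fmt.Errorf("ImageFsInfo: %w", err)
        }
        // UsedBytes/InodesUsed correspond to the overlay-images mountpoint
        // figures (143052 bytes, 67 inodes) reported in the dump above.
        for _, u := range fs.ImageFilesystems {
            fmt.Printf("%s: %d bytes used, %d inodes\n",
                u.FsId.Mountpoint, u.UsedBytes.Value, u.InodesUsed.Value)
        }
        return nil
    }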
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.333769850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e6ed74a-448e-46e8-b15c-a4029ff61e42 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.333842860Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e6ed74a-448e-46e8-b15c-a4029ff61e42 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.334927232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bde52f8b-324b-4375-810f-ff4a33aa560a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.335475083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721607615335445699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bde52f8b-324b-4375-810f-ff4a33aa560a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.336048134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ba76491-b919-44bf-a697-9dffb71c68c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.336101260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ba76491-b919-44bf-a697-9dffb71c68c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:20:15 multinode-332426 crio[2855]: time="2024-07-22 00:20:15.336477083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74e365f9a940e3cd15b0f86dd171eadaeca29cf74b7d57829e01a412f2b63b29,PodSandboxId:891d28ebec68e635cb101ce0d11b46b33643979cb2b2cf3082273157a29eb80d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721607404076720989,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54bcf11731a98d85b4d61b01c20e0db73cdd9acf3e988a095ec21e7b7d3f4501,PodSandboxId:deb2045cfa5a5718f27e74cce48a7747be515c460417aa4f3b58b26edb0d98a3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721607370550629349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0376b5ca7df77acb823a9edf98a98de76d8921e29cb3f3d83b6ca8e80dd9adad,PodSandboxId:a5d86fab1d641a2ff38a9e9de8942b08b1fbde9f32161e93b1f0136db0c8a5b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721607370420683446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edc446bb04896ca905fae0a4b6da09ae02715062a8fe8136c49b2053071a5902,PodSandboxId:3b7dccc46d65ab5a45bd1aafdc3b5d260a5054d941f0c1e0a878532c0b12bbf6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721607370347750826,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},An
notations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:45562da2aee19b5644bbde258e52e1a1003d8f48a83daa2c330a0f91ef2bd3cd,PodSandboxId:26cfa9830a4731bd59fb2ecec688c192ae76c501e5ad580957bff50d387e079c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721607370351748330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.kubernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:d82aa1b89b93801811d20cdf64f0f91e160736eca23a9665b31342ed3d3505b2,PodSandboxId:a1eaa28dfbc32e7eeaec24d6c82b901a54c3d2173b8a21a4bb602f37860118e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721607366550872229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotations:map[string]string{io.kubernetes.container.hash: bc690f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ccf2fa4343e40390af084045f5b500056976d69c67d290fd03e7bd83c2a4dc55,PodSandboxId:abf2787e3e3ddb6f9d7c65fc47aff9d18cc012faf441f614ad8bafb50b5b0271,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721607366528535782,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:0d3a51dbfdeccbde8bebb1c9443df0cdd4d847fcc049d2fa977b25371d4672b9,PodSandboxId:76bcada916ab25be3ba089a808dc1b49b1e8e949f0cc4ad3519db3e287bec768,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721607366502021903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:8ca1b020e36a140ceba7ae156489e3a9eeb54c7816a7bb4279159edd347584f8,PodSandboxId:611b738c9a42901bef5746c6a5062e10ed2b80a0d365e8bc7699279af871e649,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721607366471377575,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:865d5fd7e8026011ea33af66b5cd14f914a97dc721a10b278825bb9ff83a10dd,PodSandboxId:0e8b20f046d00ddc310638b441bf2452abd9e13498b11dd8d02462dd7169cee5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721607050406276139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-d4fqv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 303001b7-6534-4dcf-8179-14278c447b01,},Annotations:map[string]string{io.kubernetes.container.hash: 4633a8fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846,PodSandboxId:5b314dc7aee93075a178107b9a08aa8c52545f2cd99901092055b111e993231a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721606999160809325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kgmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c759a961-9e1a-4487-8e22-50b46a782fc1,},Annotations:map[string]string{io.kubernetes.container.hash: ca9a8650,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:5ffa33729a9f2c6579b436a8648d9425603469fec732ba79865cf701a8a112de,PodSandboxId:618c48779675e1886825de4aa794ab0bb196417b6cf21dda6740c765485b91c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721606999158508559,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb343e45-5269-4f6d-81cc-ff99ee75d01e,},Annotations:map[string]string{io.kubernetes.container.hash: ed7f5465,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7,PodSandboxId:841fb5d9879ce282583f3b4a60d0beda36ef35abf98c9e6e351dd77d63c3734f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721606987583988092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8hmt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6945ba2-29c0-406e-aa81-491a78d7f5b6,},Annotations:map[string]string{io.kubernetes.container.hash: ca784296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421,PodSandboxId:7cbccdfd31f6cf028a08626712316b40dd780e4bea6427e8a57d69db8564e2d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721606983692911141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lj2fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d,},Annotations:map[string]string{io.kubernetes.container.hash: 3be9c462,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7,PodSandboxId:bc26157c8e97bb849699d9f16d4060385f543523ac0a258140e1b55245d36ae4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721606963925009138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541a4237cbbb5eb1a707c8d92be72855,},Annotations:map[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24,PodSandboxId:6023bbb4eb5ac5ff4ede3da90c5b263663d0953a214bb588e4e049094c057589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721606963887376353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d7d9b7a6730a8f4c354c39af5312cbc,},Annotations:map[string]string{io.kubernetes.container.hash: bc690f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b,PodSandboxId:d5ba48a08b5b1651507acc0fcbb246aed3240ef94155cb8178fb2fd45361d388,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721606963878152937,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e649a404b6b8eed590ed6566820afb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea,PodSandboxId:8943995b24cf19442343a44ce5837c11a58c9ea0c76fa95b10390c1a16ed3c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721606963810452872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-332426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a97d7a8340db7a22714382f343c4ea17,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ba76491-b919-44bf-a697-9dffb71c68c1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74e365f9a940e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   891d28ebec68e       busybox-fc5497c4f-d4fqv
	54bcf11731a98       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   deb2045cfa5a5       kindnet-8hmt4
	0376b5ca7df77       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   a5d86fab1d641       coredns-7db6d8ff4d-kgmn4
	45562da2aee19       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   26cfa9830a473       kube-proxy-lj2fx
	edc446bb04896       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   3b7dccc46d65a       storage-provisioner
	d82aa1b89b938       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   a1eaa28dfbc32       etcd-multinode-332426
	ccf2fa4343e40       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   abf2787e3e3dd       kube-controller-manager-multinode-332426
	0d3a51dbfdecc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   76bcada916ab2       kube-scheduler-multinode-332426
	8ca1b020e36a1       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   611b738c9a429       kube-apiserver-multinode-332426
	865d5fd7e8026       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   0e8b20f046d00       busybox-fc5497c4f-d4fqv
	bf77a115bd3b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   5b314dc7aee93       coredns-7db6d8ff4d-kgmn4
	5ffa33729a9f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   618c48779675e       storage-provisioner
	be5af95309f16       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   841fb5d9879ce       kindnet-8hmt4
	84be68af94193       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   7cbccdfd31f6c       kube-proxy-lj2fx
	cb8198ba979fc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   bc26157c8e97b       kube-apiserver-multinode-332426
	6640fb78d9d74       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   6023bbb4eb5ac       etcd-multinode-332426
	d1fe9fff883b0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   d5ba48a08b5b1       kube-scheduler-multinode-332426
	0b655b503e2b5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   8943995b24cf1       kube-controller-manager-multinode-332426
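
A listing like the table above comes straight from the CRI. As a rough sketch only (the profile name is taken from this run; the report gathers this output automatically, so a manual invocation is purely illustrative), it can usually be reproduced on the node with:

    # list all CRI-O containers, including exited ones, via the minikube VM
    minikube ssh -p multinode-332426 -- sudo crictl ps -a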
	
	
	==> coredns [0376b5ca7df77acb823a9edf98a98de76d8921e29cb3f3d83b6ca8e80dd9adad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35252 - 30027 "HINFO IN 2016048654068247314.6170083794807566555. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015554366s
	
	
	==> coredns [bf77a115bd3b5c1d49cb1f717c0a79efd821efafa29b2a4612fa496db13e3846] <==
	[INFO] 10.244.1.2:50179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001621508s
	[INFO] 10.244.1.2:59257 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086082s
	[INFO] 10.244.1.2:54360 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084272s
	[INFO] 10.244.1.2:49805 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001221189s
	[INFO] 10.244.1.2:56137 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081096s
	[INFO] 10.244.1.2:39288 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064923s
	[INFO] 10.244.1.2:53603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168163s
	[INFO] 10.244.0.3:33445 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088238s
	[INFO] 10.244.0.3:60751 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080196s
	[INFO] 10.244.0.3:49851 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050668s
	[INFO] 10.244.0.3:58365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097549s
	[INFO] 10.244.1.2:37491 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122295s
	[INFO] 10.244.1.2:45475 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150054s
	[INFO] 10.244.1.2:47471 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089484s
	[INFO] 10.244.1.2:50935 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088533s
	[INFO] 10.244.0.3:32821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115775s
	[INFO] 10.244.0.3:33144 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174417s
	[INFO] 10.244.0.3:40417 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086886s
	[INFO] 10.244.0.3:36269 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121613s
	[INFO] 10.244.1.2:56272 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111783s
	[INFO] 10.244.1.2:58196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109756s
	[INFO] 10.244.1.2:41786 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007171s
	[INFO] 10.244.1.2:55219 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083701s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
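
The query lines above follow the CoreDNS log plugin format: client ip:port, query id, then the quoted query (type, class, name, protocol, request size, DO bit, advertised buffer size), followed by the response code, flags, response size, and duration. A minimal sketch for generating similar lookups from inside the cluster (pod name and image tag are illustrative only):

    # run a throwaway pod and resolve the kubernetes Service through CoreDNS
    kubectl --context multinode-332426 run dnsprobe --rm -it --restart=Never \
      --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local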
	
	
	==> describe nodes <==
	Name:               multinode-332426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-332426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=multinode-332426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_09_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-332426
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:20:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:16:09 +0000   Mon, 22 Jul 2024 00:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:16:09 +0000   Mon, 22 Jul 2024 00:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:16:09 +0000   Mon, 22 Jul 2024 00:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:16:09 +0000   Mon, 22 Jul 2024 00:09:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-332426
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 72d19522baaf4f0f8c46bd95cf97927b
	  System UUID:                72d19522-baaf-4f0f-8c46-bd95cf97927b
	  Boot ID:                    a7af36a1-0feb-4ad7-b1f5-c8b7a5023aa8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d4fqv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 coredns-7db6d8ff4d-kgmn4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-332426                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-8hmt4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-332426             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-332426    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-lj2fx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-332426             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-332426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-332426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-332426 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-332426 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-332426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-332426 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-332426 event: Registered Node multinode-332426 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-332426 status is now: NodeReady
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node multinode-332426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node multinode-332426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node multinode-332426 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node multinode-332426 event: Registered Node multinode-332426 in Controller
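
As a sanity check on the totals in the node summary above: the 850m cpu request is the sum of the per-pod requests listed (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and 850m of the node's 2-CPU (2000m) capacity is 42.5%, rendered as 42%. The same summary can be pulled directly (the grep window size here is arbitrary):

    kubectl --context multinode-332426 describe node multinode-332426 | grep -A 8 'Allocated resources'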
	
	
	Name:               multinode-332426-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-332426-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=multinode-332426
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T00_16_52_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:16:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-332426-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:17:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Jul 2024 00:17:21 +0000   Mon, 22 Jul 2024 00:18:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Jul 2024 00:17:21 +0000   Mon, 22 Jul 2024 00:18:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Jul 2024 00:17:21 +0000   Mon, 22 Jul 2024 00:18:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Jul 2024 00:17:21 +0000   Mon, 22 Jul 2024 00:18:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    multinode-332426-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e55c65b8a45541329327c2cf589759eb
	  System UUID:                e55c65b8-a455-4132-9327-c2cf589759eb
	  Boot ID:                    63ca0462-1dcc-4320-ab37-0c4e5a009724
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6ldsm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-fx662              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m49s
	  kube-system                 kube-proxy-rjx57           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m49s (x2 over 9m49s)  kubelet          Node multinode-332426-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m49s (x2 over 9m49s)  kubelet          Node multinode-332426-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m49s (x2 over 9m49s)  kubelet          Node multinode-332426-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m30s                  kubelet          Node multinode-332426-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-332426-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-332426-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-332426-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-332426-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-332426-m02 status is now: NodeNotReady
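
The NodeNotReady event above matches the Unknown Ready condition earlier in this block: the kubelet on m02 stopped posting status, which is consistent with the multinode stop/restart tests failing in this report. A quick way to read that condition, sketched with standard kubectl jsonpath:

    kubectl --context multinode-332426 get node multinode-332426-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'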
	
	
	==> dmesg <==
	[  +0.056950] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051918] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.176767] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.113851] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.249101] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.875821] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +3.839021] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.059762] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.476364] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.084302] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.375047] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.701257] systemd-fstab-generator[1456]: Ignoring "noauto" option for root device
	[  +5.138911] kauditd_printk_skb: 59 callbacks suppressed
	[Jul22 00:10] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 00:16] systemd-fstab-generator[2774]: Ignoring "noauto" option for root device
	[  +0.143610] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.169574] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.138935] systemd-fstab-generator[2812]: Ignoring "noauto" option for root device
	[  +0.259073] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.652540] systemd-fstab-generator[2938]: Ignoring "noauto" option for root device
	[  +1.832847] systemd-fstab-generator[3064]: Ignoring "noauto" option for root device
	[  +4.672332] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.226435] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.995520] systemd-fstab-generator[3906]: Ignoring "noauto" option for root device
	[ +17.535189] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [6640fb78d9d74e4219aded65f09ad1f2dc418cdeda6fb88255b7c6ab10907e24] <==
	{"level":"warn","ts":"2024-07-22T00:10:26.547049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.774046ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3536892338775504069 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-332426-m02.17e460780bfa017c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-332426-m02.17e460780bfa017c\" value_size:640 lease:3536892338775503057 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-22T00:10:26.547872Z","caller":"traceutil/trace.go:171","msg":"trace[290633637] transaction","detail":"{read_only:false; response_revision:491; number_of_response:1; }","duration":"174.947128ms","start":"2024-07-22T00:10:26.372891Z","end":"2024-07-22T00:10:26.547839Z","steps":["trace[290633637] 'process raft request'  (duration: 174.888271ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:10:26.54811Z","caller":"traceutil/trace.go:171","msg":"trace[804757579] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"246.522571ms","start":"2024-07-22T00:10:26.301575Z","end":"2024-07-22T00:10:26.548098Z","steps":["trace[804757579] 'process raft request'  (duration: 75.548913ms)","trace[804757579] 'compare'  (duration: 169.487172ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T00:10:26.548346Z","caller":"traceutil/trace.go:171","msg":"trace[2035800605] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"245.58864ms","start":"2024-07-22T00:10:26.302714Z","end":"2024-07-22T00:10:26.548303Z","steps":["trace[2035800605] 'read index received'  (duration: 74.415418ms)","trace[2035800605] 'applied index is now lower than readState.Index'  (duration: 171.172336ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:10:26.548682Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.948916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-332426-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T00:10:26.548994Z","caller":"traceutil/trace.go:171","msg":"trace[2127525726] range","detail":"{range_begin:/registry/csinodes/multinode-332426-m02; range_end:; response_count:0; response_revision:491; }","duration":"246.252369ms","start":"2024-07-22T00:10:26.302694Z","end":"2024-07-22T00:10:26.548946Z","steps":["trace[2127525726] 'agreement among raft nodes before linearized reading'  (duration: 245.866074ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:10:26.549762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.003849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-332426-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-22T00:10:26.550014Z","caller":"traceutil/trace.go:171","msg":"trace[1542341300] range","detail":"{range_begin:/registry/minions/multinode-332426-m02; range_end:; response_count:1; response_revision:491; }","duration":"247.260778ms","start":"2024-07-22T00:10:26.302739Z","end":"2024-07-22T00:10:26.55Z","steps":["trace[1542341300] 'agreement among raft nodes before linearized reading'  (duration: 246.988231ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:11:18.944262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.525279ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3536892338775504511 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-332426-m03.17e460843f67a3a7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-332426-m03.17e460843f67a3a7\" value_size:642 lease:3536892338775504109 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-22T00:11:18.944527Z","caller":"traceutil/trace.go:171","msg":"trace[2028451399] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:662; }","duration":"147.873927ms","start":"2024-07-22T00:11:18.796643Z","end":"2024-07-22T00:11:18.944517Z","steps":["trace[2028451399] 'read index received'  (duration: 146.414006ms)","trace[2028451399] 'applied index is now lower than readState.Index'  (duration: 1.459316ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T00:11:18.944598Z","caller":"traceutil/trace.go:171","msg":"trace[2028221205] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"240.001189ms","start":"2024-07-22T00:11:18.70459Z","end":"2024-07-22T00:11:18.944591Z","steps":["trace[2028221205] 'process raft request'  (duration: 74.072658ms)","trace[2028221205] 'compare'  (duration: 165.354087ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-22T00:11:18.944796Z","caller":"traceutil/trace.go:171","msg":"trace[1126455199] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"187.400127ms","start":"2024-07-22T00:11:18.757387Z","end":"2024-07-22T00:11:18.944788Z","steps":["trace[1126455199] 'process raft request'  (duration: 187.083262ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:11:18.944953Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.297146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-332426-m03\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-22T00:11:18.945501Z","caller":"traceutil/trace.go:171","msg":"trace[655306108] range","detail":"{range_begin:/registry/minions/multinode-332426-m03; range_end:; response_count:1; response_revision:623; }","duration":"148.883727ms","start":"2024-07-22T00:11:18.796608Z","end":"2024-07-22T00:11:18.945492Z","steps":["trace[655306108] 'agreement among raft nodes before linearized reading'  (duration: 148.26967ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:11:28.18203Z","caller":"traceutil/trace.go:171","msg":"trace[2143136036] transaction","detail":"{read_only:false; response_revision:669; number_of_response:1; }","duration":"216.550447ms","start":"2024-07-22T00:11:27.965451Z","end":"2024-07-22T00:11:28.182001Z","steps":["trace[2143136036] 'process raft request'  (duration: 216.27488ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:14:31.152652Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-22T00:14:31.152799Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-332426","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	{"level":"warn","ts":"2024-07-22T00:14:31.152895Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:14:31.152918Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:14:31.152982Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:14:31.153041Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T00:14:31.2294Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ce564ad586a3115","current-leader-member-id":"ce564ad586a3115"}
	{"level":"info","ts":"2024-07-22T00:14:31.231631Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-07-22T00:14:31.231857Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-07-22T00:14:31.231914Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-332426","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	
	
	==> etcd [d82aa1b89b93801811d20cdf64f0f91e160736eca23a9665b31342ed3d3505b2] <==
	{"level":"info","ts":"2024-07-22T00:16:07.113489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T00:16:07.113506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T00:16:07.113771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 switched to configuration voters=(929259593797349653)"}
	{"level":"info","ts":"2024-07-22T00:16:07.113835Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","added-peer-id":"ce564ad586a3115","added-peer-peer-urls":["https://192.168.39.67:2380"]}
	{"level":"info","ts":"2024-07-22T00:16:07.11396Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:16:07.114003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:16:07.145014Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T00:16:07.145253Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ce564ad586a3115","initial-advertise-peer-urls":["https://192.168.39.67:2380"],"listen-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.67:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T00:16:07.145291Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T00:16:07.148582Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-07-22T00:16:07.150349Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-07-22T00:16:08.183774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T00:16:08.18389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:16:08.18396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 received MsgPreVoteResp from ce564ad586a3115 at term 2"}
	{"level":"info","ts":"2024-07-22T00:16:08.184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T00:16:08.184025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 received MsgVoteResp from ce564ad586a3115 at term 3"}
	{"level":"info","ts":"2024-07-22T00:16:08.184052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T00:16:08.184081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce564ad586a3115 elected leader ce564ad586a3115 at term 3"}
	{"level":"info","ts":"2024-07-22T00:16:08.190181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:16:08.190147Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ce564ad586a3115","local-member-attributes":"{Name:multinode-332426 ClientURLs:[https://192.168.39.67:2379]}","request-path":"/0/members/ce564ad586a3115/attributes","cluster-id":"429166af17098d53","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:16:08.191597Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:16:08.191871Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:16:08.191898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:16:08.192504Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.67:2379"}
	{"level":"info","ts":"2024-07-22T00:16:08.193621Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:20:15 up 11 min,  0 users,  load average: 0.09, 0.21, 0.14
	Linux multinode-332426 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [54bcf11731a98d85b4d61b01c20e0db73cdd9acf3e988a095ec21e7b7d3f4501] <==
	I0722 00:19:11.372510       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:19:21.376776       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:19:21.376803       1 main.go:299] handling current node
	I0722 00:19:21.376816       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:19:21.376821       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:19:31.380449       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:19:31.380496       1 main.go:299] handling current node
	I0722 00:19:31.380533       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:19:31.380538       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:19:41.380807       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:19:41.380919       1 main.go:299] handling current node
	I0722 00:19:41.380966       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:19:41.380976       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:19:51.371596       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:19:51.371699       1 main.go:299] handling current node
	I0722 00:19:51.371734       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:19:51.371780       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:20:01.371633       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:20:01.371796       1 main.go:299] handling current node
	I0722 00:20:01.371875       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:20:01.371988       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:20:11.371995       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:20:11.372113       1 main.go:299] handling current node
	I0722 00:20:11.372145       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:20:11.372167       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
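
kindnet here is reconciling routes for each node's pod CIDR (10.244.0.0/24 for the control plane, 10.244.1.0/24 for m02). The per-node assignments it reads come from the Node spec and can be listed with standard jsonpath, e.g.:

    kubectl --context multinode-332426 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'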
	
	
	==> kindnet [be5af95309f160fbd607376dfcd5f22745fb5b1b77b9ad7cbfb631cd7a043fd7] <==
	I0722 00:13:48.362262       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:13:58.369602       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:13:58.369643       1 main.go:299] handling current node
	I0722 00:13:58.369658       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:13:58.369663       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:13:58.369791       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:13:58.369812       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:14:08.367262       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:14:08.367420       1 main.go:299] handling current node
	I0722 00:14:08.367449       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:14:08.367467       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:14:08.367628       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:14:08.367743       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:14:18.371260       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:14:18.371344       1 main.go:299] handling current node
	I0722 00:14:18.371362       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:14:18.371375       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:14:18.371541       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:14:18.371561       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	I0722 00:14:28.369552       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0722 00:14:28.369616       1 main.go:299] handling current node
	I0722 00:14:28.369641       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0722 00:14:28.369646       1 main.go:322] Node multinode-332426-m02 has CIDR [10.244.1.0/24] 
	I0722 00:14:28.369820       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0722 00:14:28.369842       1 main.go:322] Node multinode-332426-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8ca1b020e36a140ceba7ae156489e3a9eeb54c7816a7bb4279159edd347584f8] <==
	I0722 00:16:09.465844       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 00:16:09.492056       1 shared_informer.go:320] Caches are synced for configmaps
	I0722 00:16:09.492143       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 00:16:09.492168       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 00:16:09.514275       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0722 00:16:09.514345       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0722 00:16:09.514466       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 00:16:09.518514       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 00:16:09.518581       1 aggregator.go:165] initial CRD sync complete...
	I0722 00:16:09.518628       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 00:16:09.518638       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 00:16:09.518644       1 cache.go:39] Caches are synced for autoregister controller
	I0722 00:16:09.520439       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 00:16:09.525891       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 00:16:09.525949       1 policy_source.go:224] refreshing policies
	I0722 00:16:09.543571       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0722 00:16:09.563782       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 00:16:10.400456       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 00:16:11.312402       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 00:16:11.442029       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 00:16:11.456613       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 00:16:11.535686       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 00:16:11.542987       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 00:16:22.323741       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 00:16:22.472999       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
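
The "quota admission added evaluator" lines show the restarted apiserver registering quota evaluators lazily as each resource type is first touched; by 00:16:22 it is serving normally. Overall health can be checked with the standard readiness endpoint, e.g.:

    kubectl --context multinode-332426 get --raw '/readyz?verbose'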
	
	
	==> kube-apiserver [cb8198ba979fc0e21f445978f932a560aa570ee62cc9e582148e16fc16bca8c7] <==
	W0722 00:14:31.177069       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	[... 24 further dial failures elided: the identical "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" warning repeats across the apiserver's remaining gRPC subchannels (Channels #2 through #175) at 00:14:31 ...]
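
These dial warnings are all the same condition: etcd has already exited during shutdown, so every gRPC subchannel the apiserver holds open keeps redialing 127.0.0.1:2379 and is refused because nothing is listening. A minimal Go sketch (illustrative only, not minikube or apiserver source) that reproduces the same refused-connection check:

	// probe_etcd.go - hedged sketch: probe the etcd client port the way the
	// failing dials above do; with etcd stopped, DialTimeout returns
	// "dial tcp 127.0.0.1:2379: connect: connection refused".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", time.Second)
		if err != nil {
			fmt.Println("etcd unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("etcd port is accepting connections")
	}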
	
	
	==> kube-controller-manager [0b655b503e2b59cfd4486c9b0eda01bd9a999f460f55c09798ad352e148806ea] <==
	I0722 00:10:02.123693       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0722 00:10:26.552550       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m02\" does not exist"
	I0722 00:10:26.631608       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m02" podCIDRs=["10.244.1.0/24"]
	I0722 00:10:27.128427       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-332426-m02"
	I0722 00:10:45.297632       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:10:47.629604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.751926ms"
	I0722 00:10:47.655588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.924467ms"
	I0722 00:10:47.655756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.667µs"
	I0722 00:10:47.655870       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.277µs"
	I0722 00:10:50.772809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.621355ms"
	I0722 00:10:50.773547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.957µs"
	I0722 00:10:51.388112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488601ms"
	I0722 00:10:51.388204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.197µs"
	I0722 00:11:18.948522       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m03\" does not exist"
	I0722 00:11:18.950033       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:11:18.976584       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m03" podCIDRs=["10.244.2.0/24"]
	I0722 00:11:22.147650       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-332426-m03"
	I0722 00:11:38.885791       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:12:06.592553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:12:07.533702       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m03\" does not exist"
	I0722 00:12:07.535172       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:12:07.556613       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m03" podCIDRs=["10.244.3.0/24"]
	I0722 00:12:25.978293       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:13:12.254865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.444355ms"
	I0722 00:13:12.254978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.231µs"
	
	
	==> kube-controller-manager [ccf2fa4343e40390af084045f5b500056976d69c67d290fd03e7bd83c2a4dc55] <==
	I0722 00:16:50.927834       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m02" podCIDRs=["10.244.1.0/24"]
	I0722 00:16:52.614523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.818µs"
	I0722 00:16:52.798150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.026µs"
	I0722 00:16:52.808681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.699µs"
	I0722 00:16:52.818577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.65µs"
	I0722 00:16:52.857502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.699µs"
	I0722 00:16:52.864572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.438µs"
	I0722 00:16:52.868892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.29µs"
	I0722 00:17:10.246496       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:17:10.264598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.619µs"
	I0722 00:17:10.280736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.334µs"
	I0722 00:17:14.251651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.877948ms"
	I0722 00:17:14.251955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.046µs"
	I0722 00:17:28.125079       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:17:29.181028       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-332426-m03\" does not exist"
	I0722 00:17:29.184048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:17:29.202723       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-332426-m03" podCIDRs=["10.244.2.0/24"]
	I0722 00:17:48.988193       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:17:54.091092       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-332426-m02"
	I0722 00:18:32.273689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.974294ms"
	I0722 00:18:32.273793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.086µs"
	I0722 00:18:42.142644       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-q4dfh"
	I0722 00:18:42.164075       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-q4dfh"
	I0722 00:18:42.164157       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5szrb"
	I0722 00:18:42.189136       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5szrb"
	
	
	==> kube-proxy [45562da2aee19b5644bbde258e52e1a1003d8f48a83daa2c330a0f91ef2bd3cd] <==
	I0722 00:16:10.658862       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:16:10.699827       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	I0722 00:16:10.769776       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:16:10.769830       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:16:10.769848       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:16:10.773259       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:16:10.773525       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:16:10.773548       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:16:10.775597       1 config.go:192] "Starting service config controller"
	I0722 00:16:10.775630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:16:10.775662       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:16:10.775676       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:16:10.776143       1 config.go:319] "Starting node config controller"
	I0722 00:16:10.776171       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:16:10.876304       1 shared_informer.go:320] Caches are synced for node config
	I0722 00:16:10.876388       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:16:10.876395       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [84be68af9419356a59ab0d5c0930c4f1968d66c611110c36e0909c80fbe30421] <==
	I0722 00:09:44.095752       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:09:44.114722       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	I0722 00:09:44.169548       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:09:44.169596       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:09:44.169637       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:09:44.175618       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:09:44.175814       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:09:44.175826       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:09:44.180130       1 config.go:192] "Starting service config controller"
	I0722 00:09:44.180240       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:09:44.180359       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:09:44.180459       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:09:44.181635       1 config.go:319] "Starting node config controller"
	I0722 00:09:44.181673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:09:44.281202       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:09:44.281386       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:09:44.281900       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0d3a51dbfdeccbde8bebb1c9443df0cdd4d847fcc049d2fa977b25371d4672b9] <==
	I0722 00:16:07.559589       1 serving.go:380] Generated self-signed cert in-memory
	W0722 00:16:09.416533       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 00:16:09.416571       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:16:09.416581       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 00:16:09.416626       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 00:16:09.512197       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 00:16:09.512261       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:16:09.516918       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 00:16:09.516953       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:16:09.517767       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 00:16:09.517849       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 00:16:09.618516       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d1fe9fff883b00d2184b3e1a66d0556dea81f79a43cf2ae23e5f18c214b93a9b] <==
	E0722 00:09:26.584527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 00:09:26.584577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:09:26.584627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:09:26.584695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 00:09:26.584723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 00:09:26.584762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:09:26.584784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:09:26.584821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 00:09:26.584842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 00:09:26.586241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:09:26.586291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:09:27.508868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:09:27.508909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 00:09:27.638483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 00:09:27.638660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 00:09:27.657434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 00:09:27.657599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 00:09:27.772745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:09:27.772981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:09:27.777485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:09:27.777612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 00:09:28.024212       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:09:28.024268       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 00:09:30.676406       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 00:14:31.151461       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.927178    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6945ba2-29c0-406e-aa81-491a78d7f5b6-xtables-lock\") pod \"kindnet-8hmt4\" (UID: \"d6945ba2-29c0-406e-aa81-491a78d7f5b6\") " pod="kube-system/kindnet-8hmt4"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.927393    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6945ba2-29c0-406e-aa81-491a78d7f5b6-cni-cfg\") pod \"kindnet-8hmt4\" (UID: \"d6945ba2-29c0-406e-aa81-491a78d7f5b6\") " pod="kube-system/kindnet-8hmt4"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: I0722 00:16:09.927479    3071 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d-xtables-lock\") pod \"kube-proxy-lj2fx\" (UID: \"5f7e3ea2-c65c-412d-9fe9-8cda0b7dd45d\") " pod="kube-system/kube-proxy-lj2fx"
	Jul 22 00:16:09 multinode-332426 kubelet[3071]: E0722 00:16:09.962576    3071 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-332426\" already exists" pod="kube-system/kube-apiserver-multinode-332426"
	Jul 22 00:16:19 multinode-332426 kubelet[3071]: I0722 00:16:19.205603    3071 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 22 00:17:05 multinode-332426 kubelet[3071]: E0722 00:17:05.882259    3071 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:17:05 multinode-332426 kubelet[3071]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:17:05 multinode-332426 kubelet[3071]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:17:05 multinode-332426 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:17:05 multinode-332426 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	[... the same "Could not set up iptables canary" / ip6tables `nat' table failure block repeats verbatim at Jul 22 00:18:05, 00:19:05, and 00:20:05 ...]
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:20:14.945386   43555 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-5094/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
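
The "token too long" error in the stderr above is standard-library behavior rather than a corrupt file: Go's bufio.Scanner rejects any line longer than its token limit (bufio.MaxScanTokenSize, 64 KiB, by default), and lastStart.txt evidently contains longer lines. A hedged sketch of the usual fix, raising the scanner's buffer cap (illustrative; not the harness's actual code):

	// read_long_lines.go - sketch: scan a log file whose lines can exceed
	// bufio.Scanner's default 64 KiB token limit.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: read_long_lines <file>")
			os.Exit(2)
		}
		f, err := os.Open(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Allow tokens up to 10 MiB instead of the 64 KiB default; without
		// this, an over-long line fails with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process one line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan:", err)
		}
	}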
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-332426 -n multinode-332426
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-332426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.29s)

                                                
                                    
TestPreload (269.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-424632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0722 00:24:55.172896   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-424632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.880543242s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-424632 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-424632 image pull gcr.io/k8s-minikube/busybox: (2.834857864s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-424632
E0722 00:27:37.333491   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0722 00:27:54.282783   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-424632: exit status 82 (2m0.452111052s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-424632"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-424632 failed: exit status 82
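
Exit status 82 (GUEST_STOP_TIMEOUT) together with the "Temporary Error: stop: unable to stop vm, current state \"Running\"" wording is consistent with a stop path that polls the VM's state until a deadline and gives up while libvirt still reports the domain as Running. A hedged, generic sketch of that poll-until-deadline shape (not minikube's actual implementation):

	// wait_stopped.go - sketch of a poll-until-deadline stop wait.
	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitStopped re-checks the machine state every tick until it reads
	// "Stopped" or the context deadline expires.
	func waitStopped(ctx context.Context, state func() string) error {
		tick := time.NewTicker(2 * time.Second)
		defer tick.Stop()
		for {
			if state() == "Stopped" {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("unable to stop vm, current state %q: %w", state(), ctx.Err())
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// A guest stuck in "Running", as in the failure above, never satisfies the poll.
		fmt.Println(waitStopped(ctx, func() string { return "Running" }))
	}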
panic.go:626: *** TestPreload FAILED at 2024-07-22 00:28:06.800253512 +0000 UTC m=+3805.508852238
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-424632 -n test-preload-424632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-424632 -n test-preload-424632: exit status 3 (18.578014022s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:28:25.374918   46397 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0722 00:28:25.374938   46397 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-424632" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-424632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-424632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-424632: (1.080497802s)
--- FAIL: TestPreload (269.83s)

                                                
                                    
TestKubernetesUpgrade (381.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.728462826s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-921436] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-921436" primary control-plane node in "kubernetes-upgrade-921436" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 00:30:16.217170   47480 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:30:16.217279   47480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:30:16.217295   47480 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:16.217303   47480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:30:16.217674   47480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:30:16.219624   47480 out.go:298] Setting JSON to false
	I0722 00:30:16.220751   47480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4360,"bootTime":1721603856,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:30:16.220839   47480 start.go:139] virtualization: kvm guest
	I0722 00:30:16.222945   47480 out.go:177] * [kubernetes-upgrade-921436] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:30:16.224322   47480 notify.go:220] Checking for updates...
	I0722 00:30:16.225703   47480 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:30:16.228410   47480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:30:16.230351   47480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:30:16.232210   47480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:30:16.234205   47480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:30:16.236356   47480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:30:16.237566   47480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:30:16.278507   47480 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 00:30:16.279670   47480 start.go:297] selected driver: kvm2
	I0722 00:30:16.279713   47480 start.go:901] validating driver "kvm2" against <nil>
	I0722 00:30:16.279737   47480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:30:16.280676   47480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:30:16.280757   47480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:30:16.297726   47480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:30:16.297805   47480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:30:16.298034   47480 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 00:30:16.298060   47480 cni.go:84] Creating CNI manager for ""
	I0722 00:30:16.298072   47480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:30:16.298085   47480 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 00:30:16.298239   47480 start.go:340] cluster config:
	{Name:kubernetes-upgrade-921436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-921436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:30:16.298403   47480 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:30:16.300022   47480 out.go:177] * Starting "kubernetes-upgrade-921436" primary control-plane node in "kubernetes-upgrade-921436" cluster
	I0722 00:30:16.301057   47480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:30:16.301106   47480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0722 00:30:16.301121   47480 cache.go:56] Caching tarball of preloaded images
	I0722 00:30:16.301215   47480 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:30:16.301228   47480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0722 00:30:16.301636   47480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/config.json ...
	I0722 00:30:16.301661   47480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/config.json: {Name:mk835c284575af09eb53b08e04634a7febf2fa12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:16.301825   47480 start.go:360] acquireMachinesLock for kubernetes-upgrade-921436: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:30:16.301877   47480 start.go:364] duration metric: took 26.897µs to acquireMachinesLock for "kubernetes-upgrade-921436"
	I0722 00:30:16.301897   47480 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-921436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-921436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:30:16.301996   47480 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 00:30:16.303341   47480 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 00:30:16.303496   47480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:30:16.303531   47480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:30:16.319205   47480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0722 00:30:16.319692   47480 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:30:16.320325   47480 main.go:141] libmachine: Using API Version  1
	I0722 00:30:16.320343   47480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:30:16.320755   47480 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:30:16.321087   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetMachineName
	I0722 00:30:16.321266   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:30:16.321453   47480 start.go:159] libmachine.API.Create for "kubernetes-upgrade-921436" (driver="kvm2")
	I0722 00:30:16.321496   47480 client.go:168] LocalClient.Create starting
	I0722 00:30:16.321535   47480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0722 00:30:16.321576   47480 main.go:141] libmachine: Decoding PEM data...
	I0722 00:30:16.321602   47480 main.go:141] libmachine: Parsing certificate...
	I0722 00:30:16.321706   47480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0722 00:30:16.321733   47480 main.go:141] libmachine: Decoding PEM data...
	I0722 00:30:16.321751   47480 main.go:141] libmachine: Parsing certificate...
	I0722 00:30:16.321776   47480 main.go:141] libmachine: Running pre-create checks...
	I0722 00:30:16.321789   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .PreCreateCheck
	I0722 00:30:16.322090   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetConfigRaw
	I0722 00:30:16.322513   47480 main.go:141] libmachine: Creating machine...
	I0722 00:30:16.322530   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .Create
	I0722 00:30:16.322693   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Creating KVM machine...
	I0722 00:30:16.323946   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found existing default KVM network
	I0722 00:30:16.324803   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:16.324639   47556 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204610}
	I0722 00:30:16.324845   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | created network xml: 
	I0722 00:30:16.324863   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | <network>
	I0722 00:30:16.324879   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |   <name>mk-kubernetes-upgrade-921436</name>
	I0722 00:30:16.324887   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |   <dns enable='no'/>
	I0722 00:30:16.324899   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |   
	I0722 00:30:16.324912   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0722 00:30:16.324924   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |     <dhcp>
	I0722 00:30:16.324940   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0722 00:30:16.324954   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |     </dhcp>
	I0722 00:30:16.324972   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |   </ip>
	I0722 00:30:16.324984   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG |   
	I0722 00:30:16.324999   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | </network>
	I0722 00:30:16.325009   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | 
	I0722 00:30:16.330330   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | trying to create private KVM network mk-kubernetes-upgrade-921436 192.168.39.0/24...
	I0722 00:30:16.403259   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | private KVM network mk-kubernetes-upgrade-921436 192.168.39.0/24 created
	I0722 00:30:16.403289   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:16.403184   47556 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:30:16.403303   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436 ...
	I0722 00:30:16.403321   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 00:30:16.403411   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:30:16.644165   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:16.644061   47556 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa...
	I0722 00:30:16.899001   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:16.898916   47556 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/kubernetes-upgrade-921436.rawdisk...
	I0722 00:30:16.899038   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Writing magic tar header
	I0722 00:30:16.899062   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Writing SSH key tar header
	I0722 00:30:16.899746   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:16.899665   47556 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436 ...
	I0722 00:30:16.899791   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436
	I0722 00:30:16.899819   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436 (perms=drwx------)
	I0722 00:30:16.899831   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0722 00:30:16.899843   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0722 00:30:16.899855   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0722 00:30:16.899864   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0722 00:30:16.899876   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 00:30:16.899889   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 00:30:16.899904   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:30:16.899925   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0722 00:30:16.899934   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Creating domain...
	I0722 00:30:16.899944   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 00:30:16.899953   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Checking permissions on dir: /home/jenkins
	I0722 00:30:16.899967   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Checking permissions on dir: /home
	I0722 00:30:16.899981   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Skipping /home - not owner
	I0722 00:30:16.901025   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) define libvirt domain using xml: 
	I0722 00:30:16.901050   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) <domain type='kvm'>
	I0722 00:30:16.901061   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   <name>kubernetes-upgrade-921436</name>
	I0722 00:30:16.901070   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   <memory unit='MiB'>2200</memory>
	I0722 00:30:16.901079   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   <vcpu>2</vcpu>
	I0722 00:30:16.901088   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   <features>
	I0722 00:30:16.901100   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <acpi/>
	I0722 00:30:16.901109   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <apic/>
	I0722 00:30:16.901118   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <pae/>
	I0722 00:30:16.901132   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     
	I0722 00:30:16.901150   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   </features>
	I0722 00:30:16.901160   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   <cpu mode='host-passthrough'>
	I0722 00:30:16.901168   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   
	I0722 00:30:16.901186   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   </cpu>
	I0722 00:30:16.901195   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   <os>
	I0722 00:30:16.901204   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <type>hvm</type>
	I0722 00:30:16.901210   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <boot dev='cdrom'/>
	I0722 00:30:16.901216   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <boot dev='hd'/>
	I0722 00:30:16.901226   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <bootmenu enable='no'/>
	I0722 00:30:16.901257   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   </os>
	I0722 00:30:16.901278   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   <devices>
	I0722 00:30:16.901290   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <disk type='file' device='cdrom'>
	I0722 00:30:16.901314   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/boot2docker.iso'/>
	I0722 00:30:16.901328   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <target dev='hdc' bus='scsi'/>
	I0722 00:30:16.901339   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <readonly/>
	I0722 00:30:16.901356   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     </disk>
	I0722 00:30:16.901369   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <disk type='file' device='disk'>
	I0722 00:30:16.901399   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 00:30:16.901424   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/kubernetes-upgrade-921436.rawdisk'/>
	I0722 00:30:16.901443   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <target dev='hda' bus='virtio'/>
	I0722 00:30:16.901454   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     </disk>
	I0722 00:30:16.901464   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <interface type='network'>
	I0722 00:30:16.901477   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <source network='mk-kubernetes-upgrade-921436'/>
	I0722 00:30:16.901487   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <model type='virtio'/>
	I0722 00:30:16.901498   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     </interface>
	I0722 00:30:16.901520   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <interface type='network'>
	I0722 00:30:16.901532   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <source network='default'/>
	I0722 00:30:16.901542   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <model type='virtio'/>
	I0722 00:30:16.901550   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     </interface>
	I0722 00:30:16.901557   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <serial type='pty'>
	I0722 00:30:16.901570   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <target port='0'/>
	I0722 00:30:16.901577   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     </serial>
	I0722 00:30:16.901595   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <console type='pty'>
	I0722 00:30:16.901614   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <target type='serial' port='0'/>
	I0722 00:30:16.901627   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     </console>
	I0722 00:30:16.901638   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     <rng model='virtio'>
	I0722 00:30:16.901653   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)       <backend model='random'>/dev/random</backend>
	I0722 00:30:16.901663   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     </rng>
	I0722 00:30:16.901678   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     
	I0722 00:30:16.901689   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)     
	I0722 00:30:16.901699   47480 main.go:141] libmachine: (kubernetes-upgrade-921436)   </devices>
	I0722 00:30:16.901712   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) </domain>
	I0722 00:30:16.901735   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) 
	I0722 00:30:16.905498   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:e6:10:80 in network default
	I0722 00:30:16.906101   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Ensuring networks are active...
	I0722 00:30:16.906122   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:16.906783   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Ensuring network default is active
	I0722 00:30:16.907092   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Ensuring network mk-kubernetes-upgrade-921436 is active
	I0722 00:30:16.907848   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Getting domain xml...
	I0722 00:30:16.908583   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Creating domain...
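
The "define libvirt domain using xml" / "Creating domain..." pair maps to the same define-then-create sequence as the network. A fragment sketch, reusing conn from the network sketch above and the same assumed bindings (defineAndStart is a hypothetical helper, not minikube's function):

    // defineAndStart registers the domain XML with libvirt, then boots it.
    // conn is a *libvirt.Connect; domainXML is the <domain type='kvm'>
    // document logged above.
    func defineAndStart(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML) // persistent definition
        if err != nil {
            return fmt.Errorf("define domain: %w", err)
        }
        defer dom.Free()
        // Create boots the defined domain; the guest then requests a DHCP
        // lease on mk-kubernetes-upgrade-921436, which the driver waits
        // for next.
        return dom.Create()
    }
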
	I0722 00:30:18.243684   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Waiting to get IP...
	I0722 00:30:18.244482   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:18.244883   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:18.244919   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:18.244868   47556 retry.go:31] will retry after 257.468925ms: waiting for machine to come up
	I0722 00:30:18.504524   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:18.505119   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:18.505146   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:18.505032   47556 retry.go:31] will retry after 275.035162ms: waiting for machine to come up
	I0722 00:30:18.781190   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:18.781604   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:18.781633   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:18.781569   47556 retry.go:31] will retry after 348.21433ms: waiting for machine to come up
	I0722 00:30:19.131010   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:19.131476   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:19.131514   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:19.131452   47556 retry.go:31] will retry after 522.184306ms: waiting for machine to come up
	I0722 00:30:19.655159   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:19.655616   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:19.655639   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:19.655575   47556 retry.go:31] will retry after 695.109749ms: waiting for machine to come up
	I0722 00:30:20.352454   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:20.352954   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:20.352980   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:20.352914   47556 retry.go:31] will retry after 793.805218ms: waiting for machine to come up
	I0722 00:30:21.148627   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:21.149060   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:21.149113   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:21.149016   47556 retry.go:31] will retry after 717.902146ms: waiting for machine to come up
	I0722 00:30:21.868205   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:21.868604   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:21.868631   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:21.868574   47556 retry.go:31] will retry after 1.292750725s: waiting for machine to come up
	I0722 00:30:23.162486   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:23.162893   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:23.162919   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:23.162843   47556 retry.go:31] will retry after 1.80538039s: waiting for machine to come up
	I0722 00:30:24.970805   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:24.971123   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:24.971145   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:24.971078   47556 retry.go:31] will retry after 1.590035769s: waiting for machine to come up
	I0722 00:30:26.562816   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:26.563279   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:26.563302   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:26.563247   47556 retry.go:31] will retry after 1.845052072s: waiting for machine to come up
	I0722 00:30:28.411384   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:28.411800   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:28.411827   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:28.411757   47556 retry.go:31] will retry after 2.898729917s: waiting for machine to come up
	I0722 00:30:31.313761   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:31.314171   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:31.314194   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:31.314129   47556 retry.go:31] will retry after 4.112360822s: waiting for machine to come up
	I0722 00:30:35.430784   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:35.431133   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find current IP address of domain kubernetes-upgrade-921436 in network mk-kubernetes-upgrade-921436
	I0722 00:30:35.431153   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | I0722 00:30:35.431098   47556 retry.go:31] will retry after 3.744787792s: waiting for machine to come up
	I0722 00:30:39.178789   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.179284   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Found IP for machine: 192.168.39.95
	I0722 00:30:39.179301   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has current primary IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
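
The retry.go lines above show a jittered, growing backoff (257ms, 275ms, ... up to ~4s) while polling for the guest's DHCP lease. A self-contained sketch of that pattern, with lookupIP as a hypothetical stand-in for the lease query:

    package wait

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // WaitForIP polls lookupIP until it returns an address or the timeout
    // expires, sleeping with a jittered, doubling backoff between attempts
    // (the shape visible in the retry.go log lines). Sketch only.
    func WaitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            // Jitter avoids hammering libvirt in lockstep with other waiters.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            time.Sleep(sleep)
            if backoff < 4*time.Second {
                backoff *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for machine IP")
    }
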
	I0722 00:30:39.179310   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Reserving static IP address...
	I0722 00:30:39.179676   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-921436", mac: "52:54:00:f8:e7:6a", ip: "192.168.39.95"} in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.252080   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Getting to WaitForSSH function...
	I0722 00:30:39.252104   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Reserved static IP address: 192.168.39.95
	I0722 00:30:39.252118   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Waiting for SSH to be available...
	I0722 00:30:39.254398   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.254859   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:39.254909   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.255053   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Using SSH client type: external
	I0722 00:30:39.255073   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa (-rw-------)
	I0722 00:30:39.255106   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:30:39.255120   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | About to run SSH command:
	I0722 00:30:39.255135   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | exit 0
	I0722 00:30:39.382359   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | SSH cmd err, output: <nil>: 
	I0722 00:30:39.382684   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) KVM machine creation complete!
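
The external-SSH probe above shells out to /usr/bin/ssh with the exact options logged and runs `exit 0`; a zero exit status means the guest's sshd is reachable. A runnable sketch of that probe (options, key path, and address taken from the log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa",
            "-p", "22",
            "docker@192.168.39.95",
            "exit 0",
        }
        // Non-zero exit or a connect failure means sshd is not up yet;
        // the driver retries until it is.
        if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
            log.Fatalf("SSH not available yet: %v", err)
        }
    }
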
	I0722 00:30:39.383014   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetConfigRaw
	I0722 00:30:39.383821   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:30:39.384037   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:30:39.384244   47480 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 00:30:39.384259   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetState
	I0722 00:30:39.385784   47480 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 00:30:39.385803   47480 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 00:30:39.385811   47480 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 00:30:39.385820   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:39.387945   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.388266   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:39.388290   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.388388   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:39.388560   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:39.388725   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:39.388857   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:39.388995   47480 main.go:141] libmachine: Using SSH client type: native
	I0722 00:30:39.389213   47480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0722 00:30:39.389227   47480 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 00:30:39.497825   47480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:30:39.497855   47480 main.go:141] libmachine: Detecting the provisioner...
	I0722 00:30:39.497863   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:39.500619   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.500949   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:39.500976   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.501177   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:39.501378   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:39.501539   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:39.501682   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:39.501829   47480 main.go:141] libmachine: Using SSH client type: native
	I0722 00:30:39.502004   47480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0722 00:30:39.502017   47480 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 00:30:39.615046   47480 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 00:30:39.615142   47480 main.go:141] libmachine: found compatible host: buildroot
	I0722 00:30:39.615158   47480 main.go:141] libmachine: Provisioning with buildroot...
	I0722 00:30:39.615180   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetMachineName
	I0722 00:30:39.615456   47480 buildroot.go:166] provisioning hostname "kubernetes-upgrade-921436"
	I0722 00:30:39.615479   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetMachineName
	I0722 00:30:39.615687   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:39.618011   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.618314   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:39.618347   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.618447   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:39.618653   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:39.618819   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:39.618955   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:39.619117   47480 main.go:141] libmachine: Using SSH client type: native
	I0722 00:30:39.619316   47480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0722 00:30:39.619330   47480 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-921436 && echo "kubernetes-upgrade-921436" | sudo tee /etc/hostname
	I0722 00:30:39.743816   47480 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-921436
	
	I0722 00:30:39.743852   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:39.746344   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.746642   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:39.746677   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.746850   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:39.747079   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:39.747246   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:39.747390   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:39.747553   47480 main.go:141] libmachine: Using SSH client type: native
	I0722 00:30:39.747779   47480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0722 00:30:39.747799   47480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-921436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-921436/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-921436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:30:39.866849   47480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
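
Both provisioning commands above are plain shell composed around the machine name. A sketch of how they might be built, where run is a hypothetical execute-over-SSH helper and hostname is the machine name:

    // Sketch: composing the hostname/hosts provisioning commands shown above.
    func provisionHostname(run func(string) error, hostname string) error {
        set := fmt.Sprintf(
            "sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname",
            hostname)
        if err := run(set); err != nil {
            return err
        }
        // Rewrite any existing 127.0.1.1 entry instead of appending a
        // duplicate, exactly as the logged shell fragment does.
        fix := fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
        return run(fix)
    }
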
	I0722 00:30:39.866877   47480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:30:39.866913   47480 buildroot.go:174] setting up certificates
	I0722 00:30:39.866924   47480 provision.go:84] configureAuth start
	I0722 00:30:39.866933   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetMachineName
	I0722 00:30:39.867219   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetIP
	I0722 00:30:39.869864   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.870212   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:39.870234   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.870379   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:39.872399   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.872662   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:39.872704   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:39.872816   47480 provision.go:143] copyHostCerts
	I0722 00:30:39.872875   47480 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:30:39.872888   47480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:30:39.872971   47480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:30:39.873103   47480 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:30:39.873116   47480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:30:39.873156   47480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:30:39.873242   47480 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:30:39.873252   47480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:30:39.873283   47480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:30:39.873364   47480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-921436 san=[127.0.0.1 192.168.39.95 kubernetes-upgrade-921436 localhost minikube]
	I0722 00:30:40.061271   47480 provision.go:177] copyRemoteCerts
	I0722 00:30:40.061331   47480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:30:40.061355   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:40.063789   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.064249   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.064270   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.064472   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:40.064659   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:40.064853   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:40.064996   47480 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa Username:docker}
	I0722 00:30:40.148071   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:30:40.170533   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0722 00:30:40.192087   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:30:40.213725   47480 provision.go:87] duration metric: took 346.780641ms to configureAuth
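
configureAuth generates a server certificate signed by the minikube CA, embedding the SANs listed in the log (127.0.0.1, 192.168.39.95, the machine name, localhost, minikube). A standard-library sketch of that kind of signing step; loading caCert/caKey is omitted, the 26280h lifetime is taken from the profile's CertExpiration setting, and this is not minikube's exact code:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // SignServerCert issues a TLS server certificate off the given CA with
    // the SANs from the log line above, and writes it out PEM-encoded.
    func SignServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-921436"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"kubernetes-upgrade-921436", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &priv.PublicKey, caKey)
        if err != nil {
            return err
        }
        return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
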
	I0722 00:30:40.213756   47480 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:30:40.213926   47480 config.go:182] Loaded profile config "kubernetes-upgrade-921436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:30:40.214014   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:40.216537   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.216868   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.216893   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.217080   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:40.217270   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:40.217440   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:40.217600   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:40.217752   47480 main.go:141] libmachine: Using SSH client type: native
	I0722 00:30:40.217902   47480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0722 00:30:40.217924   47480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:30:40.476258   47480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:30:40.476284   47480 main.go:141] libmachine: Checking connection to Docker...
	I0722 00:30:40.476296   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetURL
	I0722 00:30:40.477619   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | Using libvirt version 6000000
	I0722 00:30:40.479862   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.480192   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.480215   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.480421   47480 main.go:141] libmachine: Docker is up and running!
	I0722 00:30:40.480435   47480 main.go:141] libmachine: Reticulating splines...
	I0722 00:30:40.480441   47480 client.go:171] duration metric: took 24.158934437s to LocalClient.Create
	I0722 00:30:40.480464   47480 start.go:167] duration metric: took 24.159012585s to libmachine.API.Create "kubernetes-upgrade-921436"
	I0722 00:30:40.480477   47480 start.go:293] postStartSetup for "kubernetes-upgrade-921436" (driver="kvm2")
	I0722 00:30:40.480491   47480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:30:40.480527   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:30:40.480767   47480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:30:40.480789   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:40.482914   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.483247   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.483281   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.483445   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:40.483584   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:40.483779   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:40.483892   47480 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa Username:docker}
	I0722 00:30:40.568543   47480 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:30:40.572746   47480 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:30:40.572776   47480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:30:40.572855   47480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:30:40.572955   47480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:30:40.573080   47480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:30:40.582039   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:30:40.604132   47480 start.go:296] duration metric: took 123.639602ms for postStartSetup
	I0722 00:30:40.604198   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetConfigRaw
	I0722 00:30:40.604731   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetIP
	I0722 00:30:40.607146   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.607589   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.607637   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.607830   47480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/config.json ...
	I0722 00:30:40.608013   47480 start.go:128] duration metric: took 24.306008682s to createHost
	I0722 00:30:40.608036   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:40.610270   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.610565   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.610616   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.610771   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:40.610934   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:40.611097   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:40.611214   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:40.611371   47480 main.go:141] libmachine: Using SSH client type: native
	I0722 00:30:40.611573   47480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0722 00:30:40.611598   47480 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:30:40.723110   47480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608240.700185186
	
	I0722 00:30:40.723135   47480 fix.go:216] guest clock: 1721608240.700185186
	I0722 00:30:40.723142   47480 fix.go:229] Guest: 2024-07-22 00:30:40.700185186 +0000 UTC Remote: 2024-07-22 00:30:40.6080269 +0000 UTC m=+24.434452402 (delta=92.158286ms)
	I0722 00:30:40.723159   47480 fix.go:200] guest clock delta is within tolerance: 92.158286ms
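
The three fix.go lines compute guest/host skew from the `date +%s.%N` output and accept the machine when the delta is small. Reproducing the arithmetic with the logged timestamps (the 2s tolerance here is an assumption for illustration, not minikube's actual constant):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock parsed from the SSH output: 1721608240.700185186.
        guest := time.Unix(1721608240, 700185186)
        // Host-side reading from the same log line.
        host := time.Date(2024, 7, 22, 0, 30, 40, 608026900, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold
        // Prints a delta of 92.158286ms, matching the log.
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
    }
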
	I0722 00:30:40.723164   47480 start.go:83] releasing machines lock for "kubernetes-upgrade-921436", held for 24.421277341s
	I0722 00:30:40.723190   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:30:40.723486   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetIP
	I0722 00:30:40.726511   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.726868   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.726899   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.727076   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:30:40.727578   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:30:40.727765   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:30:40.727861   47480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:30:40.727906   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:40.727986   47480 ssh_runner.go:195] Run: cat /version.json
	I0722 00:30:40.728000   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:30:40.730563   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.730707   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.730956   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.730981   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.731006   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:40.731029   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:40.731131   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:40.731261   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:30:40.731322   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:40.731423   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:30:40.731498   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:40.731556   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:30:40.731638   47480 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa Username:docker}
	I0722 00:30:40.731696   47480 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa Username:docker}
	I0722 00:30:40.856738   47480 ssh_runner.go:195] Run: systemctl --version
	I0722 00:30:40.862589   47480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:30:41.015368   47480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:30:41.021312   47480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:30:41.021409   47480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:30:41.037211   47480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:30:41.037238   47480 start.go:495] detecting cgroup driver to use...
	I0722 00:30:41.037306   47480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:30:41.054291   47480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:30:41.068040   47480 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:30:41.068101   47480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:30:41.081684   47480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:30:41.095467   47480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:30:41.234199   47480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:30:41.389251   47480 docker.go:233] disabling docker service ...
	I0722 00:30:41.389334   47480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:30:41.402903   47480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:30:41.416059   47480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:30:41.526975   47480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:30:41.643365   47480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:30:41.655907   47480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:30:41.675197   47480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:30:41.675258   47480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:30:41.684950   47480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:30:41.685014   47480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:30:41.694545   47480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:30:41.704158   47480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:30:41.713358   47480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:30:41.722874   47480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:30:41.731566   47480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:30:41.731624   47480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:30:41.743332   47480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:30:41.754940   47480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:30:41.863582   47480 ssh_runner.go:195] Run: sudo systemctl restart crio
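
The four sed edits plus daemon-reload/restart above are the whole CRI-O reconfiguration for this profile. Collected into one runnable sketch, with the commands verbatim from the log but executed locally here rather than over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := []string{
            // Pin the pause image expected for Kubernetes v1.20.0.
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
            // The kubelet is using cgroupfs here, so CRI-O must match.
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            // Drop any stale conmon_cgroup line, then re-add it after
            // cgroup_manager so conmon runs in the pod cgroup.
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for _, c := range cmds {
            if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
                fmt.Printf("%s failed: %v\n%s", c, err, out)
                return
            }
        }
    }
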
	I0722 00:30:42.008154   47480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:30:42.008215   47480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:30:42.013449   47480 start.go:563] Will wait 60s for crictl version
	I0722 00:30:42.013502   47480 ssh_runner.go:195] Run: which crictl
	I0722 00:30:42.017772   47480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:30:42.054253   47480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
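	The version probe talks to the endpoint written into /etc/crictl.yaml a few lines earlier; the same check can be run explicitly, without relying on the config file (sketch):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version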
	I0722 00:30:42.054337   47480 ssh_runner.go:195] Run: crio --version
	I0722 00:30:42.080965   47480 ssh_runner.go:195] Run: crio --version
	I0722 00:30:42.109156   47480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:30:42.110218   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetIP
	I0722 00:30:42.113011   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:42.113392   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:30:30 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:30:42.113422   47480 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:30:42.113606   47480 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:30:42.117375   47480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
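	The bash one-liner above is an idempotent hosts-file update: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the result is copied back over /etc/hosts via a temp file. Afterwards /etc/hosts contains the line:

	192.168.39.1	host.minikube.internal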
	I0722 00:30:42.129530   47480 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-921436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-921436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:30:42.129627   47480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:30:42.129682   47480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:30:42.162594   47480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
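	The preload check parses the JSON inventory from crictl; the same listing can be inspected by hand on the VM (sketch):

	sudo crictl images                  # human-readable table
	sudo crictl images --output json    # the form minikube parses here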
	I0722 00:30:42.162694   47480 ssh_runner.go:195] Run: which lz4
	I0722 00:30:42.166860   47480 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:30:42.170806   47480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:30:42.170843   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:30:43.666050   47480 crio.go:462] duration metric: took 1.499239806s to copy over tarball
	I0722 00:30:43.666121   47480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:30:46.265285   47480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.599136768s)
	I0722 00:30:46.265312   47480 crio.go:469] duration metric: took 2.599233986s to extract the tarball
	I0722 00:30:46.265325   47480 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:30:46.307277   47480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:30:46.353170   47480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:30:46.353197   47480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:30:46.353265   47480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:30:46.353270   47480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:30:46.353267   47480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:30:46.353353   47480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:30:46.353376   47480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:30:46.353441   47480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:30:46.353510   47480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:30:46.353357   47480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:30:46.354935   47480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:30:46.356272   47480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:30:46.356307   47480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:30:46.356340   47480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:30:46.356361   47480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:30:46.356375   47480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:30:46.356678   47480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:30:46.357015   47480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:30:46.584305   47480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:30:46.585118   47480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:30:46.586931   47480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:30:46.595825   47480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:30:46.601242   47480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:30:46.602730   47480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:30:46.663939   47480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:30:46.663997   47480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:30:46.664041   47480 ssh_runner.go:195] Run: which crictl
	I0722 00:30:46.666245   47480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:30:46.671349   47480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:30:46.671390   47480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:30:46.671440   47480 ssh_runner.go:195] Run: which crictl
	I0722 00:30:46.719014   47480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:30:46.719059   47480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:30:46.719081   47480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:30:46.719111   47480 ssh_runner.go:195] Run: which crictl
	I0722 00:30:46.719118   47480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:30:46.719153   47480 ssh_runner.go:195] Run: which crictl
	I0722 00:30:46.734211   47480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:30:46.734259   47480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:30:46.734307   47480 ssh_runner.go:195] Run: which crictl
	I0722 00:30:46.734315   47480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:30:46.734338   47480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:30:46.734353   47480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:30:46.734395   47480 ssh_runner.go:195] Run: which crictl
	I0722 00:30:46.759251   47480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:30:46.759327   47480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:30:46.759357   47480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:30:46.759375   47480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:30:46.759335   47480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:30:46.759412   47480 ssh_runner.go:195] Run: which crictl
	I0722 00:30:46.759432   47480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:30:46.802097   47480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:30:46.802156   47480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:30:46.868324   47480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:30:46.868373   47480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:30:46.868436   47480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:30:46.868441   47480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 00:30:46.868522   47480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:30:46.880823   47480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:30:46.905511   47480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:30:47.217520   47480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:30:47.354804   47480 cache_images.go:92] duration metric: took 1.001592323s to LoadCachedImages
	W0722 00:30:47.354880   47480 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
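	This warning is non-fatal: with neither the preload nor the image cache usable, minikube proceeds and lets kubeadm pull the v1.20.0 images from the registry during preflight. For offline runs the cache can be pre-populated from a connected host, e.g. (a sketch using the minikube cache subcommand):

	minikube cache add registry.k8s.io/etcd:3.4.13-0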
	I0722 00:30:47.354892   47480 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.20.0 crio true true} ...
	I0722 00:30:47.355018   47480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-921436 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-921436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
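	The rendered ExecStart above is written below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, pinning the kubelet to CRI-O's socket and the node IP. Once staged it can be inspected on the node with (sketch):

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf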
	I0722 00:30:47.355101   47480 ssh_runner.go:195] Run: crio config
	I0722 00:30:47.403531   47480 cni.go:84] Creating CNI manager for ""
	I0722 00:30:47.403554   47480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:30:47.403565   47480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:30:47.403590   47480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-921436 NodeName:kubernetes-upgrade-921436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:30:47.403781   47480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-921436"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
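	The generated manifest stacks v1beta2 InitConfiguration and ClusterConfiguration with KubeletConfiguration and KubeProxyConfiguration documents in one file; kubeadm accepts the multi-document file directly. A dry run exercises the whole config without touching the node (a sketch, using the staged binary path from this log):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run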
	
	I0722 00:30:47.403851   47480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:30:47.413784   47480 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:30:47.413849   47480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:30:47.423245   47480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0722 00:30:47.439770   47480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:30:47.455457   47480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:30:47.470709   47480 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I0722 00:30:47.474354   47480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:30:47.486408   47480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:30:47.623961   47480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:30:47.649700   47480 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436 for IP: 192.168.39.95
	I0722 00:30:47.649724   47480 certs.go:194] generating shared ca certs ...
	I0722 00:30:47.649744   47480 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:47.649926   47480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:30:47.649980   47480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:30:47.649992   47480 certs.go:256] generating profile certs ...
	I0722 00:30:47.650064   47480 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/client.key
	I0722 00:30:47.650084   47480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/client.crt with IP's: []
	I0722 00:30:47.724791   47480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/client.crt ...
	I0722 00:30:47.724822   47480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/client.crt: {Name:mkcb1c2ea1d3c4fdc6477211ee0507b873e0d0c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:47.725020   47480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/client.key ...
	I0722 00:30:47.725037   47480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/client.key: {Name:mkf71dea8d9b80caceccc6b255a3695f7f45b62a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:47.725167   47480 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.key.4b3b72e2
	I0722 00:30:47.725193   47480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.crt.4b3b72e2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95]
	I0722 00:30:47.863074   47480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.crt.4b3b72e2 ...
	I0722 00:30:47.863106   47480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.crt.4b3b72e2: {Name:mk5ec47837e2bde97e63075696df4ca27ea15bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:47.863299   47480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.key.4b3b72e2 ...
	I0722 00:30:47.863316   47480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.key.4b3b72e2: {Name:mkd71b84f1c217fafc2697a5f9a89394f4bf85be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:47.863422   47480 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.crt.4b3b72e2 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.crt
	I0722 00:30:47.863527   47480 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.key.4b3b72e2 -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.key
	I0722 00:30:47.863597   47480 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.key
	I0722 00:30:47.863615   47480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.crt with IP's: []
	I0722 00:30:47.986103   47480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.crt ...
	I0722 00:30:47.986131   47480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.crt: {Name:mkfe03e8e344e2615f9b69d6bd130f06827626a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:47.986313   47480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.key ...
	I0722 00:30:47.986328   47480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.key: {Name:mk796daf57782304df5593fb3378177980d224d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:47.986551   47480 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:30:47.986589   47480 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:30:47.986598   47480 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:30:47.986641   47480 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:30:47.986666   47480 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:30:47.986691   47480 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:30:47.986730   47480 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
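	Any of the certs written above can be sanity-checked by hand with openssl, e.g. (a sketch; the path is the profile directory from this log):

	openssl x509 -noout -subject -enddate -in /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.crt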
	I0722 00:30:47.987275   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:30:48.012439   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:30:48.037754   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:30:48.062402   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:30:48.084732   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 00:30:48.111792   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 00:30:48.139458   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:30:48.166923   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:30:48.191708   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:30:48.213756   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:30:48.235700   47480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:30:48.257735   47480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:30:48.273396   47480 ssh_runner.go:195] Run: openssl version
	I0722 00:30:48.279274   47480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:30:48.289843   47480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:30:48.294170   47480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:30:48.294236   47480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:30:48.300008   47480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:30:48.310599   47480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:30:48.321241   47480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:30:48.325494   47480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:30:48.325553   47480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:30:48.331272   47480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:30:48.341686   47480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:30:48.351912   47480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:30:48.356098   47480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:30:48.356149   47480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:30:48.361646   47480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
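	The b5213941.0, 51391683.0 and 3ec20f2e.0 link names are OpenSSL subject-hash filenames: at verification time OpenSSL hashes a certificate's subject and looks for <hash>.0 in /etc/ssl/certs, so each PEM gets a symlink named after its own hash. The hash is produced by the same command the log runs, e.g. (which is where the b5213941 in the first link name comes from):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem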
	I0722 00:30:48.371880   47480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:30:48.375662   47480 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:30:48.375734   47480 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-921436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-921436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:30:48.375829   47480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:30:48.375887   47480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:30:48.418166   47480 cri.go:89] found id: ""
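	crictl ps -a --quiet prints bare container IDs, and --label restricts the listing to containers whose CRI labels match; the empty result here means no kube-system containers exist yet, i.e. a genuinely fresh first start. The same filter works interactively:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system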
	I0722 00:30:48.418233   47480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:30:48.427974   47480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:30:48.437213   47480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:30:48.448664   47480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:30:48.448682   47480 kubeadm.go:157] found existing configuration files:
	
	I0722 00:30:48.448730   47480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:30:48.457123   47480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:30:48.457187   47480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:30:48.472039   47480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:30:48.480819   47480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:30:48.480884   47480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:30:48.491157   47480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:30:48.502186   47480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:30:48.502247   47480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:30:48.514997   47480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:30:48.525309   47480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:30:48.525372   47480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
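	The four grep/rm pairs above implement stale-config cleanup: any kubeconfig that does not point at control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it. On this fresh node none of the files exist, so every grep exits 2 and each rm is a no-op. The logic is equivalent to (a compact sketch, not minikube's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done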
	I0722 00:30:48.535015   47480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:30:48.655417   47480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:30:48.655505   47480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:30:48.799429   47480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:30:48.799627   47480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:30:48.799783   47480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:30:48.966643   47480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:30:49.174298   47480 out.go:204]   - Generating certificates and keys ...
	I0722 00:30:49.174444   47480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:30:49.174568   47480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:30:49.174699   47480 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 00:30:49.309483   47480 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 00:30:49.386231   47480 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 00:30:49.459082   47480 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 00:30:49.645864   47480 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 00:30:49.646106   47480 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-921436 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I0722 00:30:49.985124   47480 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 00:30:49.985333   47480 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-921436 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I0722 00:30:50.042161   47480 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 00:30:50.144562   47480 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 00:30:50.320866   47480 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 00:30:50.321117   47480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:30:50.630271   47480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:30:50.858451   47480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:30:50.978971   47480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:30:51.386106   47480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:30:51.411155   47480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:30:51.412217   47480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:30:51.412288   47480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:30:51.537957   47480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:30:51.539653   47480 out.go:204]   - Booting up control plane ...
	I0722 00:30:51.539760   47480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:30:51.549754   47480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:30:51.551885   47480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:30:51.553588   47480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:30:51.560396   47480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:31:31.555259   47480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:31:31.556017   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:31:31.556297   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:31:36.556385   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:31:36.556601   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:31:46.555652   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:31:46.555893   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:32:06.555228   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:32:06.555518   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:32:46.557017   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:32:46.557402   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:32:46.557415   47480 kubeadm.go:310] 
	I0722 00:32:46.557525   47480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:32:46.557670   47480 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:32:46.557684   47480 kubeadm.go:310] 
	I0722 00:32:46.557752   47480 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:32:46.557829   47480 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:32:46.558011   47480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:32:46.558025   47480 kubeadm.go:310] 
	I0722 00:32:46.558252   47480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:32:46.558331   47480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:32:46.558417   47480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:32:46.558428   47480 kubeadm.go:310] 
	I0722 00:32:46.558848   47480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:32:46.559045   47480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:32:46.559057   47480 kubeadm.go:310] 
	I0722 00:32:46.559306   47480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:32:46.559543   47480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:32:46.559731   47480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:32:46.559935   47480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:32:46.559966   47480 kubeadm.go:310] 
	I0722 00:32:46.560830   47480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:32:46.560930   47480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0722 00:32:46.561166   47480 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-921436 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-921436 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
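	The root cause here is that the kubelet never answered its local healthz probe, so kubeadm timed out waiting for the control plane. The suggestions kubeadm prints above are the right starting point; on this CRI-O node they reduce to roughly (sketch):

	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 50
	curl -sSL http://localhost:10248/healthz
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo systemctl enable kubelet.service   # also clears the Service-Kubelet preflight warning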
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-921436 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-921436 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
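	A minimal manual reproduction of the kubelet-check probe above, assuming shell access to the node via `minikube ssh` (the endpoint and port are the ones kubeadm polls in the log):
	
	  # Probe the kubelet healthz endpoint that kubeadm's kubelet-check polls:
	  minikube -p kubernetes-upgrade-921436 ssh -- curl -sSL http://localhost:10248/healthz
	  # If the connection is refused, inspect the unit as the error text suggests:
	  minikube -p kubernetes-upgrade-921436 ssh -- sudo systemctl status kubelet
	  minikube -p kubernetes-upgrade-921436 ssh -- "sudo journalctl -xeu kubelet | tail -n 50"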
	
	I0722 00:32:46.561224   47480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:32:46.561524   47480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:32:47.701241   47480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.13998881s)
	I0722 00:32:47.701304   47480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:32:47.715482   47480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:32:47.725180   47480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:32:47.725202   47480 kubeadm.go:157] found existing configuration files:
	
	I0722 00:32:47.725255   47480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:32:47.734393   47480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:32:47.734457   47480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:32:47.742799   47480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:32:47.751126   47480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:32:47.751181   47480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:32:47.759680   47480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:32:47.768047   47480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:32:47.768100   47480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:32:47.776981   47480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:32:47.785394   47480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:32:47.785456   47480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
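	The grep-then-remove sequence above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A condensed sketch of the same loop, using only the paths and endpoint shown in the log:
	
	  endpoint="https://control-plane.minikube.internal:8443"
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # grep exits non-zero when the endpoint is absent (or the file is missing),
	    # in which case the file is treated as stale and removed
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	  done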
	I0722 00:32:47.794034   47480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:32:47.864530   47480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:32:47.864627   47480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:32:48.001203   47480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:32:48.001373   47480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:32:48.001504   47480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:32:48.178562   47480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:32:48.180538   47480 out.go:204]   - Generating certificates and keys ...
	I0722 00:32:48.180640   47480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:32:48.180714   47480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:32:48.180827   47480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:32:48.180935   47480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:32:48.181032   47480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:32:48.181135   47480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:32:48.181524   47480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:32:48.182151   47480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:32:48.182853   47480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:32:48.183603   47480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:32:48.183707   47480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:32:48.183788   47480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:32:48.599205   47480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:32:48.701518   47480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:32:48.959412   47480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:32:49.117529   47480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:32:49.132538   47480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:32:49.134576   47480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:32:49.134781   47480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:32:49.273954   47480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:32:49.276166   47480 out.go:204]   - Booting up control plane ...
	I0722 00:32:49.276290   47480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:32:49.285535   47480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:32:49.287653   47480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:32:49.288721   47480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:32:49.291915   47480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:33:29.295412   47480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:33:29.295543   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:33:29.295791   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:33:34.296492   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:33:34.296722   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:33:44.297420   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:33:44.297664   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:34:04.296270   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:34:04.296485   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:34:44.295780   47480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:34:44.296004   47480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:34:44.296029   47480 kubeadm.go:310] 
	I0722 00:34:44.296083   47480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:34:44.296135   47480 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:34:44.296159   47480 kubeadm.go:310] 
	I0722 00:34:44.296207   47480 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:34:44.296269   47480 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:34:44.296422   47480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:34:44.296434   47480 kubeadm.go:310] 
	I0722 00:34:44.296586   47480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:34:44.296620   47480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:34:44.296648   47480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:34:44.296655   47480 kubeadm.go:310] 
	I0722 00:34:44.296737   47480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:34:44.296852   47480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:34:44.296867   47480 kubeadm.go:310] 
	I0722 00:34:44.297007   47480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:34:44.297118   47480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:34:44.297216   47480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:34:44.297310   47480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:34:44.297320   47480 kubeadm.go:310] 
	I0722 00:34:44.297817   47480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:34:44.297931   47480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:34:44.298056   47480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
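	The only warning on stderr in the run above is the disabled kubelet service. Clearing it is a one-liner on the node (a sketch; it may not change the outcome here, since kubeadm did start the kubelet and it still never became healthy):
	
	  sudo systemctl enable --now kubelet.service
	  sudo systemctl is-enabled kubelet.service   # should now print "enabled"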
	I0722 00:34:44.298091   47480 kubeadm.go:394] duration metric: took 3m55.922361314s to StartCluster
	I0722 00:34:44.298151   47480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:34:44.298211   47480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:34:44.341211   47480 cri.go:89] found id: ""
	I0722 00:34:44.341237   47480 logs.go:276] 0 containers: []
	W0722 00:34:44.341247   47480 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:34:44.341268   47480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:34:44.341342   47480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:34:44.374799   47480 cri.go:89] found id: ""
	I0722 00:34:44.374829   47480 logs.go:276] 0 containers: []
	W0722 00:34:44.374837   47480 logs.go:278] No container was found matching "etcd"
	I0722 00:34:44.374855   47480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:34:44.374917   47480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:34:44.408757   47480 cri.go:89] found id: ""
	I0722 00:34:44.408788   47480 logs.go:276] 0 containers: []
	W0722 00:34:44.408799   47480 logs.go:278] No container was found matching "coredns"
	I0722 00:34:44.408806   47480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:34:44.408877   47480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:34:44.442763   47480 cri.go:89] found id: ""
	I0722 00:34:44.442786   47480 logs.go:276] 0 containers: []
	W0722 00:34:44.442794   47480 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:34:44.442799   47480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:34:44.442866   47480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:34:44.477164   47480 cri.go:89] found id: ""
	I0722 00:34:44.477198   47480 logs.go:276] 0 containers: []
	W0722 00:34:44.477206   47480 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:34:44.477212   47480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:34:44.477281   47480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:34:44.509643   47480 cri.go:89] found id: ""
	I0722 00:34:44.509676   47480 logs.go:276] 0 containers: []
	W0722 00:34:44.509685   47480 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:34:44.509691   47480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:34:44.509759   47480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:34:44.542097   47480 cri.go:89] found id: ""
	I0722 00:34:44.542121   47480 logs.go:276] 0 containers: []
	W0722 00:34:44.542130   47480 logs.go:278] No container was found matching "kindnet"
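	The block above is minikube's post-mortem sweep over the CRI: for each control-plane component it lists matching containers by name, and every query comes back empty because nothing was ever started. A compact sketch of the same sweep, assuming crictl is on the node's PATH:
	
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "No container was found matching \"$name\""
	  done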
	I0722 00:34:44.542139   47480 logs.go:123] Gathering logs for kubelet ...
	I0722 00:34:44.542154   47480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:34:44.597706   47480 logs.go:123] Gathering logs for dmesg ...
	I0722 00:34:44.597740   47480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:34:44.610665   47480 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:34:44.610695   47480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:34:44.748519   47480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
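	The connection-refused error above is expected at this point: with no kube-apiserver container running, nothing serves localhost:8443, so any kubectl call fails the same way. A quick confirmation from the node (a sketch; `ss` is assumed to be available in the guest image):
	
	  sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"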
	I0722 00:34:44.748543   47480 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:34:44.748564   47480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:34:44.847478   47480 logs.go:123] Gathering logs for container status ...
	I0722 00:34:44.847512   47480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
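	The container-status command above is a deliberate fallback chain: use crictl from PATH if `which` finds it, otherwise attempt the bare name, and if the whole crictl invocation fails, fall back to docker. Written out on its own:
	
	  sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a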
	W0722 00:34:44.885662   47480 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:34:44.885711   47480 out.go:239] * 
	W0722 00:34:44.885778   47480 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:34:44.885809   47480 out.go:239] * 
	W0722 00:34:44.886765   47480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:34:44.890341   47480 out.go:177] 
	W0722 00:34:44.891584   47480 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:34:44.891657   47480 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:34:44.891686   47480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:34:44.893190   47480 out.go:177] 
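	The suggestion above points at a kubelet cgroup-driver mismatch. A sketch of the suggested retry for this profile, with the original test flags carried over (the --extra-config value assumes the node really is on systemd cgroups, as the suggestion implies):
	
	  minikube start -p kubernetes-upgrade-921436 --memory=2200 \
	    --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	    --extra-config=kubelet.cgroup-driver=systemd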

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-921436
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-921436: (1.322419186s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-921436 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-921436 status --format={{.Host}}: exit status 7 (61.683333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0722 00:34:55.172368   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.57510976s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-921436 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.216344ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-921436] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-921436
	    minikube start -p kubernetes-upgrade-921436 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9214362 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-921436 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
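The refusal above is by design: minikube will not downgrade an existing cluster in place. Option 1 from its suggestion as a single sequence, with the driver and runtime flags from this test carried over (a sketch; deleting the profile discards the existing v1.31.0-beta.0 cluster state):

  minikube delete -p kubernetes-upgrade-921436
  minikube start -p kubernetes-upgrade-921436 --kubernetes-version=v1.20.0 \
    --driver=kvm2 --container-runtime=crio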
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921436 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.430066957s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-22 00:36:34.488455594 +0000 UTC m=+4313.197054301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-921436 -n kubernetes-upgrade-921436
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-921436 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-921436 logs -n 25: (1.599983643s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-666395              | cert-options-666395       | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:32 UTC |
	| start   | -p stopped-upgrade-897070           | minikube                  | jenkins | v1.26.0 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:33 UTC |
	|         | --memory=2200 --vm-driver=kvm2      |                           |         |         |                     |                     |
	|         |  --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-897769              | offline-crio-897769       | jenkins | v1.33.1 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:32 UTC |
	| start   | -p running-upgrade-012741           | minikube                  | jenkins | v1.26.0 | 22 Jul 24 00:32 UTC | 22 Jul 24 00:34 UTC |
	|         | --memory=2200 --vm-driver=kvm2      |                           |         |         |                     |                     |
	|         |  --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-897070 stop         | minikube                  | jenkins | v1.26.0 | 22 Jul 24 00:33 UTC | 22 Jul 24 00:33 UTC |
	| start   | -p stopped-upgrade-897070           | stopped-upgrade-897070    | jenkins | v1.33.1 | 22 Jul 24 00:33 UTC | 22 Jul 24 00:34 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-012741           | running-upgrade-012741    | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:35 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-897070           | stopped-upgrade-897070    | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:34 UTC |
	| start   | -p NoKubernetes-302969              | NoKubernetes-302969       | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC |                     |
	|         | --no-kubernetes                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20           |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-302969              | NoKubernetes-302969       | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:35 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-921436        | kubernetes-upgrade-921436 | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:34 UTC |
	| start   | -p kubernetes-upgrade-921436        | kubernetes-upgrade-921436 | jenkins | v1.33.1 | 22 Jul 24 00:34 UTC | 22 Jul 24 00:35 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p cert-expiration-576705           | cert-expiration-576705    | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC | 22 Jul 24 00:35 UTC |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h             |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-302969              | NoKubernetes-302969       | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC | 22 Jul 24 00:35 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-302969              | NoKubernetes-302969       | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC | 22 Jul 24 00:35 UTC |
	| start   | -p NoKubernetes-302969              | NoKubernetes-302969       | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC | 22 Jul 24 00:35 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-921436        | kubernetes-upgrade-921436 | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC |                     |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-921436        | kubernetes-upgrade-921436 | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC | 22 Jul 24 00:36 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-012741           | running-upgrade-012741    | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC | 22 Jul 24 00:35 UTC |
	| start   | -p pause-998383 --memory=2048       | pause-998383              | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC |                     |
	|         | --install-addons=false              |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2            |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-576705           | cert-expiration-576705    | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC | 22 Jul 24 00:35 UTC |
	| ssh     | -p NoKubernetes-302969 sudo         | NoKubernetes-302969       | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-302969              | NoKubernetes-302969       | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC | 22 Jul 24 00:35 UTC |
	| start   | -p NoKubernetes-302969              | NoKubernetes-302969       | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p force-systemd-env-332204         | force-systemd-env-332204  | jenkins | v1.33.1 | 22 Jul 24 00:35 UTC |                     |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:35:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
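
The header block above documents the klog-style entry format used for the rest of this log. As an illustration only (not part of the test output), here is a minimal Go sketch that splits one such entry into its fields; the regular expression and field names are my own:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		entry := "I0722 00:35:58.281298   54650 out.go:291] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(entry); m != nil {
			fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
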
	I0722 00:35:58.281298   54650 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:35:58.281390   54650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:35:58.281397   54650 out.go:304] Setting ErrFile to fd 2...
	I0722 00:35:58.281401   54650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:35:58.281585   54650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:35:58.282104   54650 out.go:298] Setting JSON to false
	I0722 00:35:58.283031   54650 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4702,"bootTime":1721603856,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:35:58.283095   54650 start.go:139] virtualization: kvm guest
	I0722 00:35:58.285221   54650 out.go:177] * [force-systemd-env-332204] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:35:58.286452   54650 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:35:58.286505   54650 notify.go:220] Checking for updates...
	I0722 00:35:58.288704   54650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:35:58.289963   54650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:35:58.291073   54650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:35:58.292190   54650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:35:58.293459   54650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0722 00:35:58.295000   54650 config.go:182] Loaded profile config "NoKubernetes-302969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0722 00:35:58.295089   54650 config.go:182] Loaded profile config "kubernetes-upgrade-921436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:35:58.295179   54650 config.go:182] Loaded profile config "pause-998383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:35:58.295263   54650 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:35:58.327752   54650 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 00:35:59.411081   54103 start.go:364] duration metric: took 15.942628276s to acquireMachinesLock for "pause-998383"
	I0722 00:35:59.411120   54103 start.go:93] Provisioning new machine with config: &{Name:pause-998383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-998383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:35:59.411234   54103 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 00:35:59.193656   53903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:35:59.193680   53903 machine.go:97] duration metric: took 6.608642275s to provisionDockerMachine
	I0722 00:35:59.193693   53903 start.go:293] postStartSetup for "kubernetes-upgrade-921436" (driver="kvm2")
	I0722 00:35:59.193708   53903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:35:59.193728   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:35:59.194064   53903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:35:59.194089   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:35:59.196736   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.197115   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:35:03 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:35:59.197144   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.197254   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:35:59.197435   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:35:59.197588   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:35:59.197732   53903 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa Username:docker}
	I0722 00:35:59.276242   53903 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:35:59.280063   53903 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:35:59.280084   53903 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:35:59.280131   53903 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:35:59.280200   53903 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:35:59.280288   53903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:35:59.288835   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:35:59.311097   53903 start.go:296] duration metric: took 117.388087ms for postStartSetup
	I0722 00:35:59.311153   53903 fix.go:56] duration metric: took 6.751691082s for fixHost
	I0722 00:35:59.311177   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:35:59.313957   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.314340   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:35:03 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:35:59.314370   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.314472   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:35:59.314722   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:35:59.314900   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:35:59.315128   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:35:59.315360   53903 main.go:141] libmachine: Using SSH client type: native
	I0722 00:35:59.315509   53903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0722 00:35:59.315519   53903 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:35:59.410917   53903 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608559.373891891
	
	I0722 00:35:59.410943   53903 fix.go:216] guest clock: 1721608559.373891891
	I0722 00:35:59.410952   53903 fix.go:229] Guest: 2024-07-22 00:35:59.373891891 +0000 UTC Remote: 2024-07-22 00:35:59.311158233 +0000 UTC m=+29.248124320 (delta=62.733658ms)
	I0722 00:35:59.410973   53903 fix.go:200] guest clock delta is within tolerance: 62.733658ms
	I0722 00:35:59.410978   53903 start.go:83] releasing machines lock for "kubernetes-upgrade-921436", held for 6.851555731s
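
A note on the guest-clock lines above: minikube reads the guest clock by running date +%s.%N over SSH (the `%!s(MISSING)` rendering a few lines earlier appears to be the logger consuming the format verbs) and compares it against the host clock. A rough Go reconstruction of that check follows; the one-second tolerance is an assumption, not minikube's actual threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClock parses `date +%s.%N` output such as "1721608559.373891891".
	func guestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		const tolerance = time.Second // assumed threshold, not minikube's real value
		guest, _ := guestClock("1721608559.373891891")
		delta := time.Since(guest)
		within := delta < tolerance && delta > -tolerance
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
	}
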
	I0722 00:35:59.411004   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:35:59.411264   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetIP
	I0722 00:35:59.413955   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.414275   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:35:03 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:35:59.414303   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.414459   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:35:59.414986   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:35:59.415161   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .DriverName
	I0722 00:35:59.415240   53903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:35:59.415303   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:35:59.415362   53903 ssh_runner.go:195] Run: cat /version.json
	I0722 00:35:59.415383   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHHostname
	I0722 00:35:59.417930   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.418202   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:35:03 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:35:59.418240   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.418260   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.418375   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:35:59.418569   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:35:59.418638   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:35:03 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:35:59.418665   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:35:59.418750   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:35:59.418838   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHPort
	I0722 00:35:59.418924   53903 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa Username:docker}
	I0722 00:35:59.418962   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHKeyPath
	I0722 00:35:59.419099   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetSSHUsername
	I0722 00:35:59.419226   53903 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/kubernetes-upgrade-921436/id_rsa Username:docker}
	I0722 00:35:59.525582   53903 ssh_runner.go:195] Run: systemctl --version
	I0722 00:35:59.532067   53903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:35:59.680978   53903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:35:59.686754   53903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:35:59.686817   53903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:35:59.695774   53903 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
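
The two steps above first probe for a loopback CNI config and then rename any bridge or podman configs with a .mk_disabled suffix so the container runtime ignores them (here there were none to disable). A Go rendering of the rename step, assuming a plain directory scan is an acceptable substitute for the logged find command:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI sidelines bridge/podman CNI configs by renaming them,
	// mirroring the `find ... -exec mv {} {}.mk_disabled` command in the log.
	func disableBridgeCNI(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", src)
			}
		}
		return nil
	}

	func main() { _ = disableBridgeCNI("/etc/cni/net.d") }
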
	I0722 00:35:59.695797   53903 start.go:495] detecting cgroup driver to use...
	I0722 00:35:59.695857   53903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:35:59.711362   53903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:35:59.726332   53903 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:35:59.726387   53903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:35:59.739924   53903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:35:59.752973   53903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:35:59.899111   53903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:36:00.049232   53903 docker.go:233] disabling docker service ...
	I0722 00:36:00.049314   53903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:36:00.065238   53903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:36:00.077874   53903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:35:58.256587   54597 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:36:00.295134   54597 cni.go:84] Creating CNI manager for ""
	I0722 00:36:00.295150   54597 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:36:00.295267   54597 start.go:340] cluster config:
	{Name:NoKubernetes-302969 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-302969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.204 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:36:00.295415   54597 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:36:00.297476   54597 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-302969
	I0722 00:35:58.328852   54650 start.go:297] selected driver: kvm2
	I0722 00:35:58.328866   54650 start.go:901] validating driver "kvm2" against <nil>
	I0722 00:35:58.328879   54650 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:35:58.329525   54650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:36:00.294319   54650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:36:00.310329   54650 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:36:00.310404   54650 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:36:00.310673   54650 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 00:36:00.310702   54650 cni.go:84] Creating CNI manager for ""
	I0722 00:36:00.310715   54650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:36:00.310724   54650 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 00:36:00.310796   54650 start.go:340] cluster config:
	{Name:force-systemd-env-332204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-332204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:36:00.310937   54650 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:36:00.312630   54650 out.go:177] * Starting "force-systemd-env-332204" primary control-plane node in "force-systemd-env-332204" cluster
	I0722 00:36:00.210933   53903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:36:00.357019   53903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:36:00.370962   53903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:36:00.388020   53903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:36:00.388074   53903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:36:00.397443   53903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:36:00.397501   53903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:36:00.410448   53903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:36:00.423436   53903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:36:00.435342   53903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:36:00.445402   53903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:36:00.455769   53903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:36:00.466804   53903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:36:00.481020   53903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:36:00.492193   53903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:36:00.504116   53903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:36:00.647593   53903 ssh_runner.go:195] Run: sudo systemctl restart crio
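
The run of sed commands above patches /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before reloading systemd and restarting cri-o. As a hedged sketch, here is the first of those edits expressed in Go; the [crio.image] section header in the sample input is my assumption about the drop-in's layout:

	package main

	import (
		"fmt"
		"regexp"
	)

	// setPauseImage mirrors the logged `sed -i 's|^.*pause_image = .*$|...|'`:
	// it rewrites the pause_image key wherever it appears in the config text.
	func setPauseImage(conf, image string) string {
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
	}

	func main() {
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
		fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10"))
	}
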
	I0722 00:36:01.230701   53903 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:36:01.230811   53903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:36:01.243109   53903 start.go:563] Will wait 60s for crictl version
	I0722 00:36:01.243200   53903 ssh_runner.go:195] Run: which crictl
	I0722 00:36:01.253083   53903 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:36:01.285117   53903 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:36:01.285224   53903 ssh_runner.go:195] Run: crio --version
	I0722 00:36:01.315071   53903 ssh_runner.go:195] Run: crio --version
	I0722 00:36:01.350285   53903 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:36:00.298747   54597 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0722 00:36:00.406470   54597 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0722 00:36:00.406673   54597 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/NoKubernetes-302969/config.json ...
	I0722 00:36:00.406904   54597 start.go:360] acquireMachinesLock for NoKubernetes-302969: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:36:00.313787   54650 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:36:00.313836   54650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:36:00.313849   54650 cache.go:56] Caching tarball of preloaded images
	I0722 00:36:00.313916   54650 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:36:00.313926   54650 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:36:00.314002   54650 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/force-systemd-env-332204/config.json ...
	I0722 00:36:00.314018   54650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/force-systemd-env-332204/config.json: {Name:mkb7c246d3a928a4325776fb3f4e296c989a365c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:36:00.314133   54650 start.go:360] acquireMachinesLock for force-systemd-env-332204: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:35:59.413857   54103 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 00:35:59.414080   54103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:35:59.414118   54103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:35:59.429982   54103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0722 00:35:59.430383   54103 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:35:59.430918   54103 main.go:141] libmachine: Using API Version  1
	I0722 00:35:59.430930   54103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:35:59.431220   54103 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:35:59.431418   54103 main.go:141] libmachine: (pause-998383) Calling .GetMachineName
	I0722 00:35:59.431557   54103 main.go:141] libmachine: (pause-998383) Calling .DriverName
	I0722 00:35:59.431773   54103 start.go:159] libmachine.API.Create for "pause-998383" (driver="kvm2")
	I0722 00:35:59.431792   54103 client.go:168] LocalClient.Create starting
	I0722 00:35:59.431817   54103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0722 00:35:59.431849   54103 main.go:141] libmachine: Decoding PEM data...
	I0722 00:35:59.431859   54103 main.go:141] libmachine: Parsing certificate...
	I0722 00:35:59.431919   54103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0722 00:35:59.431933   54103 main.go:141] libmachine: Decoding PEM data...
	I0722 00:35:59.431943   54103 main.go:141] libmachine: Parsing certificate...
	I0722 00:35:59.431954   54103 main.go:141] libmachine: Running pre-create checks...
	I0722 00:35:59.431959   54103 main.go:141] libmachine: (pause-998383) Calling .PreCreateCheck
	I0722 00:35:59.432352   54103 main.go:141] libmachine: (pause-998383) Calling .GetConfigRaw
	I0722 00:35:59.432805   54103 main.go:141] libmachine: Creating machine...
	I0722 00:35:59.432815   54103 main.go:141] libmachine: (pause-998383) Calling .Create
	I0722 00:35:59.432967   54103 main.go:141] libmachine: (pause-998383) Creating KVM machine...
	I0722 00:35:59.434294   54103 main.go:141] libmachine: (pause-998383) DBG | found existing default KVM network
	I0722 00:35:59.435272   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:35:59.435119   54688 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2b:42:37} reservation:<nil>}
	I0722 00:35:59.436050   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:35:59.435944   54688 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fb50}
	I0722 00:35:59.436067   54103 main.go:141] libmachine: (pause-998383) DBG | created network xml: 
	I0722 00:35:59.436077   54103 main.go:141] libmachine: (pause-998383) DBG | <network>
	I0722 00:35:59.436097   54103 main.go:141] libmachine: (pause-998383) DBG |   <name>mk-pause-998383</name>
	I0722 00:35:59.436105   54103 main.go:141] libmachine: (pause-998383) DBG |   <dns enable='no'/>
	I0722 00:35:59.436110   54103 main.go:141] libmachine: (pause-998383) DBG |   
	I0722 00:35:59.436118   54103 main.go:141] libmachine: (pause-998383) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0722 00:35:59.436124   54103 main.go:141] libmachine: (pause-998383) DBG |     <dhcp>
	I0722 00:35:59.436131   54103 main.go:141] libmachine: (pause-998383) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0722 00:35:59.436142   54103 main.go:141] libmachine: (pause-998383) DBG |     </dhcp>
	I0722 00:35:59.436164   54103 main.go:141] libmachine: (pause-998383) DBG |   </ip>
	I0722 00:35:59.436178   54103 main.go:141] libmachine: (pause-998383) DBG |   
	I0722 00:35:59.436183   54103 main.go:141] libmachine: (pause-998383) DBG | </network>
	I0722 00:35:59.436187   54103 main.go:141] libmachine: (pause-998383) DBG | 
	I0722 00:35:59.441291   54103 main.go:141] libmachine: (pause-998383) DBG | trying to create private KVM network mk-pause-998383 192.168.50.0/24...
	I0722 00:35:59.507572   54103 main.go:141] libmachine: (pause-998383) DBG | private KVM network mk-pause-998383 192.168.50.0/24 created
	I0722 00:35:59.507592   54103 main.go:141] libmachine: (pause-998383) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/pause-998383 ...
	I0722 00:35:59.507710   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:35:59.507533   54688 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:35:59.507735   54103 main.go:141] libmachine: (pause-998383) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 00:35:59.507872   54103 main.go:141] libmachine: (pause-998383) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:35:59.736053   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:35:59.735942   54688 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/pause-998383/id_rsa...
	I0722 00:35:59.808278   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:35:59.808145   54688 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/pause-998383/pause-998383.rawdisk...
	I0722 00:35:59.808299   54103 main.go:141] libmachine: (pause-998383) DBG | Writing magic tar header
	I0722 00:35:59.808310   54103 main.go:141] libmachine: (pause-998383) DBG | Writing SSH key tar header
	I0722 00:35:59.808320   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:35:59.808255   54688 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/pause-998383 ...
	I0722 00:35:59.808334   54103 main.go:141] libmachine: (pause-998383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/pause-998383
	I0722 00:35:59.808383   54103 main.go:141] libmachine: (pause-998383) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/pause-998383 (perms=drwx------)
	I0722 00:35:59.808403   54103 main.go:141] libmachine: (pause-998383) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0722 00:35:59.808428   54103 main.go:141] libmachine: (pause-998383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0722 00:35:59.808439   54103 main.go:141] libmachine: (pause-998383) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0722 00:35:59.808450   54103 main.go:141] libmachine: (pause-998383) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0722 00:35:59.808456   54103 main.go:141] libmachine: (pause-998383) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 00:35:59.808480   54103 main.go:141] libmachine: (pause-998383) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 00:35:59.808489   54103 main.go:141] libmachine: (pause-998383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:35:59.808496   54103 main.go:141] libmachine: (pause-998383) Creating domain...
	I0722 00:35:59.808524   54103 main.go:141] libmachine: (pause-998383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0722 00:35:59.808534   54103 main.go:141] libmachine: (pause-998383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 00:35:59.808542   54103 main.go:141] libmachine: (pause-998383) DBG | Checking permissions on dir: /home/jenkins
	I0722 00:35:59.808545   54103 main.go:141] libmachine: (pause-998383) DBG | Checking permissions on dir: /home
	I0722 00:35:59.808551   54103 main.go:141] libmachine: (pause-998383) DBG | Skipping /home - not owner
	I0722 00:35:59.809851   54103 main.go:141] libmachine: (pause-998383) define libvirt domain using xml: 
	I0722 00:35:59.809868   54103 main.go:141] libmachine: (pause-998383) <domain type='kvm'>
	I0722 00:35:59.809877   54103 main.go:141] libmachine: (pause-998383)   <name>pause-998383</name>
	I0722 00:35:59.809884   54103 main.go:141] libmachine: (pause-998383)   <memory unit='MiB'>2048</memory>
	I0722 00:35:59.809891   54103 main.go:141] libmachine: (pause-998383)   <vcpu>2</vcpu>
	I0722 00:35:59.809901   54103 main.go:141] libmachine: (pause-998383)   <features>
	I0722 00:35:59.809908   54103 main.go:141] libmachine: (pause-998383)     <acpi/>
	I0722 00:35:59.809914   54103 main.go:141] libmachine: (pause-998383)     <apic/>
	I0722 00:35:59.809922   54103 main.go:141] libmachine: (pause-998383)     <pae/>
	I0722 00:35:59.809927   54103 main.go:141] libmachine: (pause-998383)     
	I0722 00:35:59.809931   54103 main.go:141] libmachine: (pause-998383)   </features>
	I0722 00:35:59.809934   54103 main.go:141] libmachine: (pause-998383)   <cpu mode='host-passthrough'>
	I0722 00:35:59.809938   54103 main.go:141] libmachine: (pause-998383)   
	I0722 00:35:59.809941   54103 main.go:141] libmachine: (pause-998383)   </cpu>
	I0722 00:35:59.809945   54103 main.go:141] libmachine: (pause-998383)   <os>
	I0722 00:35:59.809948   54103 main.go:141] libmachine: (pause-998383)     <type>hvm</type>
	I0722 00:35:59.809952   54103 main.go:141] libmachine: (pause-998383)     <boot dev='cdrom'/>
	I0722 00:35:59.809955   54103 main.go:141] libmachine: (pause-998383)     <boot dev='hd'/>
	I0722 00:35:59.809960   54103 main.go:141] libmachine: (pause-998383)     <bootmenu enable='no'/>
	I0722 00:35:59.809963   54103 main.go:141] libmachine: (pause-998383)   </os>
	I0722 00:35:59.809978   54103 main.go:141] libmachine: (pause-998383)   <devices>
	I0722 00:35:59.809986   54103 main.go:141] libmachine: (pause-998383)     <disk type='file' device='cdrom'>
	I0722 00:35:59.809997   54103 main.go:141] libmachine: (pause-998383)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/pause-998383/boot2docker.iso'/>
	I0722 00:35:59.810004   54103 main.go:141] libmachine: (pause-998383)       <target dev='hdc' bus='scsi'/>
	I0722 00:35:59.810008   54103 main.go:141] libmachine: (pause-998383)       <readonly/>
	I0722 00:35:59.810014   54103 main.go:141] libmachine: (pause-998383)     </disk>
	I0722 00:35:59.810019   54103 main.go:141] libmachine: (pause-998383)     <disk type='file' device='disk'>
	I0722 00:35:59.810026   54103 main.go:141] libmachine: (pause-998383)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 00:35:59.810033   54103 main.go:141] libmachine: (pause-998383)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/pause-998383/pause-998383.rawdisk'/>
	I0722 00:35:59.810036   54103 main.go:141] libmachine: (pause-998383)       <target dev='hda' bus='virtio'/>
	I0722 00:35:59.810043   54103 main.go:141] libmachine: (pause-998383)     </disk>
	I0722 00:35:59.810046   54103 main.go:141] libmachine: (pause-998383)     <interface type='network'>
	I0722 00:35:59.810051   54103 main.go:141] libmachine: (pause-998383)       <source network='mk-pause-998383'/>
	I0722 00:35:59.810054   54103 main.go:141] libmachine: (pause-998383)       <model type='virtio'/>
	I0722 00:35:59.810058   54103 main.go:141] libmachine: (pause-998383)     </interface>
	I0722 00:35:59.810061   54103 main.go:141] libmachine: (pause-998383)     <interface type='network'>
	I0722 00:35:59.810068   54103 main.go:141] libmachine: (pause-998383)       <source network='default'/>
	I0722 00:35:59.810071   54103 main.go:141] libmachine: (pause-998383)       <model type='virtio'/>
	I0722 00:35:59.810075   54103 main.go:141] libmachine: (pause-998383)     </interface>
	I0722 00:35:59.810078   54103 main.go:141] libmachine: (pause-998383)     <serial type='pty'>
	I0722 00:35:59.810082   54103 main.go:141] libmachine: (pause-998383)       <target port='0'/>
	I0722 00:35:59.810086   54103 main.go:141] libmachine: (pause-998383)     </serial>
	I0722 00:35:59.810089   54103 main.go:141] libmachine: (pause-998383)     <console type='pty'>
	I0722 00:35:59.810093   54103 main.go:141] libmachine: (pause-998383)       <target type='serial' port='0'/>
	I0722 00:35:59.810096   54103 main.go:141] libmachine: (pause-998383)     </console>
	I0722 00:35:59.810102   54103 main.go:141] libmachine: (pause-998383)     <rng model='virtio'>
	I0722 00:35:59.810107   54103 main.go:141] libmachine: (pause-998383)       <backend model='random'>/dev/random</backend>
	I0722 00:35:59.810113   54103 main.go:141] libmachine: (pause-998383)     </rng>
	I0722 00:35:59.810116   54103 main.go:141] libmachine: (pause-998383)     
	I0722 00:35:59.810120   54103 main.go:141] libmachine: (pause-998383)     
	I0722 00:35:59.810124   54103 main.go:141] libmachine: (pause-998383)   </devices>
	I0722 00:35:59.810130   54103 main.go:141] libmachine: (pause-998383) </domain>
	I0722 00:35:59.810136   54103 main.go:141] libmachine: (pause-998383) 
	I0722 00:35:59.814884   54103 main.go:141] libmachine: (pause-998383) DBG | domain pause-998383 has defined MAC address 52:54:00:68:be:64 in network default
	I0722 00:35:59.815561   54103 main.go:141] libmachine: (pause-998383) DBG | domain pause-998383 has defined MAC address 52:54:00:c5:7f:56 in network mk-pause-998383
	I0722 00:35:59.815579   54103 main.go:141] libmachine: (pause-998383) Ensuring networks are active...
	I0722 00:35:59.816154   54103 main.go:141] libmachine: (pause-998383) Ensuring network default is active
	I0722 00:35:59.816558   54103 main.go:141] libmachine: (pause-998383) Ensuring network mk-pause-998383 is active
	I0722 00:35:59.817086   54103 main.go:141] libmachine: (pause-998383) Getting domain xml...
	I0722 00:35:59.817778   54103 main.go:141] libmachine: (pause-998383) Creating domain...
	I0722 00:36:01.071906   54103 main.go:141] libmachine: (pause-998383) Waiting to get IP...
	I0722 00:36:01.072714   54103 main.go:141] libmachine: (pause-998383) DBG | domain pause-998383 has defined MAC address 52:54:00:c5:7f:56 in network mk-pause-998383
	I0722 00:36:01.073123   54103 main.go:141] libmachine: (pause-998383) DBG | unable to find current IP address of domain pause-998383 in network mk-pause-998383
	I0722 00:36:01.073149   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:36:01.073091   54688 retry.go:31] will retry after 276.388403ms: waiting for machine to come up
	I0722 00:36:01.351617   54103 main.go:141] libmachine: (pause-998383) DBG | domain pause-998383 has defined MAC address 52:54:00:c5:7f:56 in network mk-pause-998383
	I0722 00:36:01.352092   54103 main.go:141] libmachine: (pause-998383) DBG | unable to find current IP address of domain pause-998383 in network mk-pause-998383
	I0722 00:36:01.352115   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:36:01.352040   54688 retry.go:31] will retry after 258.728553ms: waiting for machine to come up
	I0722 00:36:01.612620   54103 main.go:141] libmachine: (pause-998383) DBG | domain pause-998383 has defined MAC address 52:54:00:c5:7f:56 in network mk-pause-998383
	I0722 00:36:01.613064   54103 main.go:141] libmachine: (pause-998383) DBG | unable to find current IP address of domain pause-998383 in network mk-pause-998383
	I0722 00:36:01.613085   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:36:01.613028   54688 retry.go:31] will retry after 317.554689ms: waiting for machine to come up
	I0722 00:36:01.932898   54103 main.go:141] libmachine: (pause-998383) DBG | domain pause-998383 has defined MAC address 52:54:00:c5:7f:56 in network mk-pause-998383
	I0722 00:36:01.933435   54103 main.go:141] libmachine: (pause-998383) DBG | unable to find current IP address of domain pause-998383 in network mk-pause-998383
	I0722 00:36:01.933459   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:36:01.933395   54688 retry.go:31] will retry after 570.300912ms: waiting for machine to come up
	I0722 00:36:02.505263   54103 main.go:141] libmachine: (pause-998383) DBG | domain pause-998383 has defined MAC address 52:54:00:c5:7f:56 in network mk-pause-998383
	I0722 00:36:02.505821   54103 main.go:141] libmachine: (pause-998383) DBG | unable to find current IP address of domain pause-998383 in network mk-pause-998383
	I0722 00:36:02.505845   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:36:02.505773   54688 retry.go:31] will retry after 467.57629ms: waiting for machine to come up
	I0722 00:36:02.975670   54103 main.go:141] libmachine: (pause-998383) DBG | domain pause-998383 has defined MAC address 52:54:00:c5:7f:56 in network mk-pause-998383
	I0722 00:36:02.976329   54103 main.go:141] libmachine: (pause-998383) DBG | unable to find current IP address of domain pause-998383 in network mk-pause-998383
	I0722 00:36:02.976354   54103 main.go:141] libmachine: (pause-998383) DBG | I0722 00:36:02.976222   54688 retry.go:31] will retry after 647.540748ms: waiting for machine to come up
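
The "will retry after ..." lines above show the backoff loop minikube runs while waiting for the new VM to obtain a DHCP lease. A self-contained Go sketch of that pattern; lookupIP is a hypothetical stand-in for the libvirt lease query, and the growth factor and jitter are only approximations of the intervals seen in the log:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func lookupIP() (string, error) { return "", errors.New("no lease yet") } // stub

	// waitForIP polls for the machine's IP with a jittered, growing backoff.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff = backoff * 3 / 2 // grow roughly like the intervals above
		}
		return "", errors.New("timed out waiting for IP")
	}

	func main() { fmt.Println(waitForIP(2 * time.Second)) }
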
	I0722 00:36:01.351728   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) Calling .GetIP
	I0722 00:36:01.354560   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:36:01.354959   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:e7:6a", ip: ""} in network mk-kubernetes-upgrade-921436: {Iface:virbr1 ExpiryTime:2024-07-22 01:35:03 +0000 UTC Type:0 Mac:52:54:00:f8:e7:6a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:kubernetes-upgrade-921436 Clientid:01:52:54:00:f8:e7:6a}
	I0722 00:36:01.354986   53903 main.go:141] libmachine: (kubernetes-upgrade-921436) DBG | domain kubernetes-upgrade-921436 has defined IP address 192.168.39.95 and MAC address 52:54:00:f8:e7:6a in network mk-kubernetes-upgrade-921436
	I0722 00:36:01.355251   53903 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:36:01.359258   53903 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-921436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-921436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:36:01.359389   53903 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:36:01.359446   53903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:36:01.397033   53903 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:36:01.397054   53903 crio.go:433] Images already preloaded, skipping extraction
	I0722 00:36:01.397105   53903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:36:01.431417   53903 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:36:01.431441   53903 cache_images.go:84] Images are preloaded, skipping loading
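The two `crictl images --output json` runs above are how the preload check concludes that "all images are preloaded" and image extraction can be skipped. A hedged sketch of that decision: decode crictl's JSON image list and compare it against an expected tag. The expected list here is illustrative, not the full preload manifest minikube actually compares against:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative expectation only; the real check covers every preloaded image.
	for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.31.0-beta.0"} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}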
	I0722 00:36:01.431448   53903 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:36:01.431561   53903 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-921436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-921436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:36:01.431642   53903 ssh_runner.go:195] Run: crio config
	I0722 00:36:01.486314   53903 cni.go:84] Creating CNI manager for ""
	I0722 00:36:01.486336   53903 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:36:01.486346   53903 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:36:01.486366   53903 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-921436 NodeName:kubernetes-upgrade-921436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs
/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:36:01.486509   53903 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-921436"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
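The kubeadm.yaml dumped above is not hand-written; minikube renders it from Go text templates filled in with the cluster's address, port, and runtime socket. A trimmed sketch of that rendering step, assuming a hypothetical params struct and a template reduced to the InitConfiguration stanza (the real templates live under pkg/minikube/bootstrapper/bsutil):

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

// params is a hypothetical subset of the values visible in the log above.
type params struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.39.95",
		APIServerPort:    8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "kubernetes-upgrade-921436",
	})
}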
	I0722 00:36:01.486567   53903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:36:01.497298   53903 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:36:01.497366   53903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:36:01.507463   53903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0722 00:36:01.524608   53903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:36:01.541133   53903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0722 00:36:01.560899   53903 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I0722 00:36:01.564633   53903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:36:01.712725   53903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:36:01.731074   53903 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436 for IP: 192.168.39.95
	I0722 00:36:01.731098   53903 certs.go:194] generating shared ca certs ...
	I0722 00:36:01.731114   53903 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:36:01.731271   53903 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:36:01.731329   53903 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:36:01.731343   53903 certs.go:256] generating profile certs ...
	I0722 00:36:01.731447   53903 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/client.key
	I0722 00:36:01.731534   53903 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.key.4b3b72e2
	I0722 00:36:01.731582   53903 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.key
	I0722 00:36:01.731721   53903 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:36:01.731757   53903 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:36:01.731770   53903 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:36:01.731803   53903 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:36:01.731832   53903 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:36:01.731858   53903 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:36:01.731910   53903 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:36:01.732693   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:36:01.763652   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:36:01.788915   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:36:01.818863   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:36:01.845042   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 00:36:01.870153   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 00:36:01.892177   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:36:01.913993   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kubernetes-upgrade-921436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:36:01.936488   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:36:01.959222   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:36:02.027273   53903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:36:02.098946   53903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:36:02.171079   53903 ssh_runner.go:195] Run: openssl version
	I0722 00:36:02.198627   53903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:36:02.281622   53903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:36:02.324947   53903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:36:02.325020   53903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:36:02.385466   53903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:36:02.438461   53903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:36:02.548227   53903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:36:02.562494   53903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:36:02.562570   53903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:36:02.660518   53903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:36:02.676903   53903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:36:02.776513   53903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:36:02.814083   53903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:36:02.814152   53903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:36:02.862152   53903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
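Each openssl/ln pair above implements OpenSSL's hashed-directory convention: /etc/ssl/certs/<subject-hash>.0 must symlink to the certificate PEM so TLS verification can locate it by hash (the log shows minikubeCA.pem linked as b5213941.0). A sketch of the same two steps in Go, shelling out to openssl as the log does; the paths are taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout prints the subject-name hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of `ln -fs`: drop any stale link, then point it at the PEM.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}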
	I0722 00:36:02.930561   53903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:36:02.947264   53903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:36:02.969582   53903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:36:03.018732   53903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:36:03.040506   53903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:36:03.052330   53903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:36:03.096375   53903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
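The six `-checkend 86400` probes above each ask whether a certificate expires within the next 24 hours, which is what decides whether regeneration is needed. The equivalent check in pure Go with crypto/x509, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors `openssl x509 -checkend 86400`: does the cert outlive 24h?
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400s; regeneration needed")
	} else {
		fmt.Println("certificate valid beyond the check window")
	}
}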
	I0722 00:36:03.154902   53903 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-921436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-921436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:36:03.154996   53903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:36:03.155055   53903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:36:03.247048   53903 cri.go:89] found id: "3f94fc6f55e64384984d95aa31ad5d60dae783efc9b88fb5da6e7019df65527c"
	I0722 00:36:03.247080   53903 cri.go:89] found id: "c24efa263783d4fdfb97e3364e5e9fed367f2fa7badf15b3d51367df38c476dc"
	I0722 00:36:03.247085   53903 cri.go:89] found id: "ca6283be721f56c4e202ba9e790b5faff1c105e86b055741b7a5a952e602fcd0"
	I0722 00:36:03.247099   53903 cri.go:89] found id: "7e819aa4298d2aee930dca5c03ec1cf7b471c5c34f65f92d79cce4047d962362"
	I0722 00:36:03.247103   53903 cri.go:89] found id: "d0531c3109410aba51a6b218ac6ff9e9a8cbd646366d5f6968c0601b15aff454"
	I0722 00:36:03.247108   53903 cri.go:89] found id: "97e3d166190a56174b3a9eb0ecfba9898ad30b6e102852b99e95a2fbb02f042a"
	I0722 00:36:03.247112   53903 cri.go:89] found id: "4339ac1d58dce9d6d02cec51310d071992c8b58a69aa69bf1d9380d8546cae56"
	I0722 00:36:03.247116   53903 cri.go:89] found id: "cebfaca01dab55f6deb7668b75af1352663aaea729014bdb6d9925ded1d2a528"
	I0722 00:36:03.247120   53903 cri.go:89] found id: "8339df73a4d1c5c3c534b9e6556e37b40f67f9a9c36253c1662247cfc9af2bb0"
	I0722 00:36:03.247129   53903 cri.go:89] found id: "c5d255b2b267cc5288f12bc3f451f7fb14b06da8d276f2ea5f1780a6ac4ad40f"
	I0722 00:36:03.247133   53903 cri.go:89] found id: "50d6181b0b154bd06ce59364f759162194f1f8aa2af0a730f1a902d050099718"
	I0722 00:36:03.247137   53903 cri.go:89] found id: "b227b0df34b495aecdc42b69f5150a544b963725122df81ec54ab0915eddfc12"
	I0722 00:36:03.247141   53903 cri.go:89] found id: "5c7bf83eb91ca579e83fcc870b62735c28cbe430e0adf2c4d97a8b14a45d7fc9"
	I0722 00:36:03.247145   53903 cri.go:89] found id: ""
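The container IDs enumerated in the cri.go "found id" lines above are the raw output of the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call run just before them, one ID per line (the final empty id is the trailing newline). A short sketch reproducing that listing:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	// --quiet prints bare container IDs, newline-separated.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}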
	I0722 00:36:03.247201   53903 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.168356967Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721608595168332972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a49bc8df-819b-439d-b9bb-2e7be0bb4fb9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.169129457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d31112e8-88ad-41bf-b511-8b45b09de903 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.169207018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d31112e8-88ad-41bf-b511-8b45b09de903 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.169628464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:862447162e746017ba1bc7bff22c09095694c7ef2abb969be01ba3464d372675,PodSandboxId:5feba58a43acbc5dc1b29102d7f6a3c69946e874d75cfecdb9aab8b8964da160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721608591113652673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7jks8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4bbc4-8af7-4015-bb7b-0241af992fce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa312fe1594487141a028e46338e19a0476a4de66ff35001bf61b3c598e8f920,PodSandboxId:e79f275cedf6e3838ebee4b03ce1251524f325943d4bb0b1f5c5594a98e223e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721608591113724188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f96b52a-7290-47ff-9c72-ee7a6308aeea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cac11c4ec624cf7ca50e0b67996a5021c043625f6146446895f85408aaf955,PodSandboxId:013a377e10ec62c41059c61e2fe6cc4e9d37ee8cd42ffcc312d6268f13673b46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721608586302559797,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896328ba122dbf462ab053163249509b,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3470abef8b59b4ca880e10cbbbd9247385c0c7cbb1d38e4b4637f0f08d788d62,PodSandboxId:e1efd9bf361df18991f8be824a9c45368109afb8a7ca5826bee089170a4ba76e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721608586264196879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5896fd0f5347aa8d3d434964b50799,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73cade1bd9680e9fbff7408c2e4f0481320811d530355f42e4f377657f10fe1f,PodSandboxId:a6b2447bb94e0cf7803300ee359bb4c4b94bdcd5956122da29538e0b6aed2200,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721608586270479557,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a605f1fe22a9ee7e6ac5952a9f6b2fb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830ca7c3c3ffe76035545341707a460c981f4efa2712d948e1e97d2510a43dbf,PodSandboxId:0891dcc1d6eab921910f12ca2282104d27ca76c981d6b39ef9dd29f825e89d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721608586247293383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48527c4b511e5eadb25a462cd2c92623,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdf7764da0eb69d0fe207942776e6ee85fede1974ca43c4bfde102722d1abf5,PodSandboxId:a8b822d0f635672adaa29def9fa185f724522ec454f49a90fe316675763eea2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721608584121111937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-l47zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec35cf3-86e2-4614-8230-26da5e722544,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080c6a82080b8183683d573c95404ac6383286267411860dd030dd4fa2b3001,PodSandboxId:149001e184d148263ec7f636586225b57f02ce9df8e130f8c774ebd727665eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721608576777247462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gnnsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb779676-2cd1-4112-b2fb-5fb21fe70e55,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa815bfb10712c4019ff6ce616ad0dca988246479b73bf43bb491a3ffa0b74cf,PodSandboxId:e79f275cedf6e3838ebee4b03ce1251524f325943d4bb0b1f5c5594a98e223e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721608562619683548,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f96b52a-7290-47ff-9c72-ee7a6308aeea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a3a269a2ec438aff85168e8a61a662df00e44d2bd782d8bdfa715a9ac48778,PodSandboxId:149001e184d148263ec7f636586225b57f02ce9df8e130f8c774ebd727665eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721608563380215101,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gnnsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb779676-2cd1-4112-b2fb-5fb21fe70e55,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb0dd2e6d8d5ac79b792bcf34ce39752aa0acd7de925fe3d4c42f98a94cf725,PodSandboxId:a8b822d0f635672adaa29def9fa185f724522ec454f49a90fe316675763eea2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721608563246154176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-l47zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec35cf3-86e2-4614-8230-26da5e722544,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f94fc6f55e64384984d95aa31ad5d60dae783efc9b88fb5da6e7019df65527c,PodSandboxId:0891dcc1d6eab921910f12ca2282104d27ca76c981d6b39ef9dd29f825e8
9d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721608562563672214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48527c4b511e5eadb25a462cd2c92623,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c24efa263783d4fdfb97e3364e5e9fed367f2fa7badf15b3d51367df38c476dc,PodSandboxId:a6b2447bb94e0cf7803300ee359bb4c4b94bdcd
5956122da29538e0b6aed2200,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721608562500511509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a605f1fe22a9ee7e6ac5952a9f6b2fb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6283be721f56c4e202ba9e790b5faff1c105e86b055741b7a5a952e602fcd0,PodSandboxId:e1efd9bf361df18991f8be824a9c45368109afb8a7ca5826bee089170a4ba76e,Metadata:&C
ontainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721608562465505585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5896fd0f5347aa8d3d434964b50799,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e819aa4298d2aee930dca5c03ec1cf7b471c5c34f65f92d79cce4047d962362,PodSandboxId:013a377e10ec62c41059c61e2fe6cc4e9d37ee8cd42ffcc312d6268f13673b46,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721608562392475763,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896328ba122dbf462ab053163249509b,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0531c3109410aba51a6b218ac6ff9e9a8cbd646366d5f6968c0601b15aff454,PodSandboxId:5feba58a43acbc5dc1b29102d7f6a3c69946e874d75cfecdb9aab8b8964da160,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721608562298558322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7jks8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4bbc4-8af7-4015-bb7b-0241af992fce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d31112e8-88ad-41bf-b511-8b45b09de903 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.207495750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c109b4c-965a-495e-85a4-8e0498f4db82 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.207584564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c109b4c-965a-495e-85a4-8e0498f4db82 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.209089783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd3de935-bf14-4cf0-a914-2a0600909a4e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.209451003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721608595209427804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd3de935-bf14-4cf0-a914-2a0600909a4e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.209951068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c79dcf44-a0e1-47e5-ac6c-a4b87783bc93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.210059281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c79dcf44-a0e1-47e5-ac6c-a4b87783bc93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.211039671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:862447162e746017ba1bc7bff22c09095694c7ef2abb969be01ba3464d372675,PodSandboxId:5feba58a43acbc5dc1b29102d7f6a3c69946e874d75cfecdb9aab8b8964da160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721608591113652673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7jks8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4bbc4-8af7-4015-bb7b-0241af992fce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa312fe1594487141a028e46338e19a0476a4de66ff35001bf61b3c598e8f920,PodSandboxId:e79f275cedf6e3838ebee4b03ce1251524f325943d4bb0b1f5c5594a98e223e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721608591113724188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f96b52a-7290-47ff-9c72-ee7a6308aeea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cac11c4ec624cf7ca50e0b67996a5021c043625f6146446895f85408aaf955,PodSandboxId:013a377e10ec62c41059c61e2fe6cc4e9d37ee8cd42ffcc312d6268f13673b46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721608586302559797,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896328ba122dbf462ab053163249509b,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3470abef8b59b4ca880e10cbbbd9247385c0c7cbb1d38e4b4637f0f08d788d62,PodSandboxId:e1efd9bf361df18991f8be824a9c45368109afb8a7ca5826bee089170a4ba76e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721608586264196879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5896fd0f5347aa8d3d434964b50799,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73cade1bd9680e9fbff7408c2e4f0481320811d530355f42e4f377657f10fe1f,PodSandboxId:a6b2447bb94e0cf7803300ee359bb4c4b94bdcd5956122da29538e0b6aed2200,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721608586270479557,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a605f1fe22a9ee7e6ac5952a9f6b2fb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830ca7c3c3ffe76035545341707a460c981f4efa2712d948e1e97d2510a43dbf,PodSandboxId:0891dcc1d6eab921910f12ca2282104d27ca76c981d6b39ef9dd29f825e89d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721608586247293383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48527c4b511e5eadb25a462cd2c92623,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdf7764da0eb69d0fe207942776e6ee85fede1974ca43c4bfde102722d1abf5,PodSandboxId:a8b822d0f635672adaa29def9fa185f724522ec454f49a90fe316675763eea2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721608584121111937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-l47zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec35cf3-86e2-4614-8230-26da5e722544,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080c6a82080b8183683d573c95404ac6383286267411860dd030dd4fa2b3001,PodSandboxId:149001e184d148263ec7f636586225b57f02ce9df8e130f8c774ebd727665eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721608576777247462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gnnsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb779676-2cd1-4112-b2fb-5fb21fe70e55,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa815bfb10712c4019ff6ce616ad0dca988246479b73bf43bb491a3ffa0b74cf,PodSandboxId:e79f275cedf6e3838ebee4b03ce1251524f325943d4bb0b1f5c5594a98e223e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721608562619683548,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f96b52a-7290-47ff-9c72-ee7a6308aeea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a3a269a2ec438aff85168e8a61a662df00e44d2bd782d8bdfa715a9ac48778,PodSandboxId:149001e184d148263ec7f636586225b57f02ce9df8e130f8c774ebd727665eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721608563380215101,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gnnsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb779676-2cd1-4112-b2fb-5fb21fe70e55,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb0dd2e6d8d5ac79b792bcf34ce39752aa0acd7de925fe3d4c42f98a94cf725,PodSandboxId:a8b822d0f635672adaa29def9fa185f724522ec454f49a90fe316675763eea2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721608563246154176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-l47zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec35cf3-86e2-4614-8230-26da5e722544,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f94fc6f55e64384984d95aa31ad5d60dae783efc9b88fb5da6e7019df65527c,PodSandboxId:0891dcc1d6eab921910f12ca2282104d27ca76c981d6b39ef9dd29f825e8
9d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721608562563672214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48527c4b511e5eadb25a462cd2c92623,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c24efa263783d4fdfb97e3364e5e9fed367f2fa7badf15b3d51367df38c476dc,PodSandboxId:a6b2447bb94e0cf7803300ee359bb4c4b94bdcd
5956122da29538e0b6aed2200,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721608562500511509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a605f1fe22a9ee7e6ac5952a9f6b2fb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6283be721f56c4e202ba9e790b5faff1c105e86b055741b7a5a952e602fcd0,PodSandboxId:e1efd9bf361df18991f8be824a9c45368109afb8a7ca5826bee089170a4ba76e,Metadata:&C
ontainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721608562465505585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5896fd0f5347aa8d3d434964b50799,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e819aa4298d2aee930dca5c03ec1cf7b471c5c34f65f92d79cce4047d962362,PodSandboxId:013a377e10ec62c41059c61e2fe6cc4e9d37ee8cd42ffcc312d6268f13673b46,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721608562392475763,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896328ba122dbf462ab053163249509b,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0531c3109410aba51a6b218ac6ff9e9a8cbd646366d5f6968c0601b15aff454,PodSandboxId:5feba58a43acbc5dc1b29102d7f6a3c69946e874d75cfecdb9aab8b8964da160,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721608562298558322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7jks8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4bbc4-8af7-4015-bb7b-0241af992fce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c79dcf44-a0e1-47e5-ac6c-a4b87783bc93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.258331750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e34126eb-5535-408a-8507-9afdd2e092e3 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.258428883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e34126eb-5535-408a-8507-9afdd2e092e3 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.259435657Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d04b06a1-afe2-4169-b625-923e994ac174 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.259801122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721608595259778856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d04b06a1-afe2-4169-b625-923e994ac174 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.260309959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3708cc43-4508-4e04-8807-5fa52ee831b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.260364027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3708cc43-4508-4e04-8807-5fa52ee831b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:36:35 kubernetes-upgrade-921436 crio[2285]: time="2024-07-22 00:36:35.260711467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:862447162e746017ba1bc7bff22c09095694c7ef2abb969be01ba3464d372675,PodSandboxId:5feba58a43acbc5dc1b29102d7f6a3c69946e874d75cfecdb9aab8b8964da160,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721608591113652673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7jks8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4bbc4-8af7-4015-bb7b-0241af992fce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa312fe1594487141a028e46338e19a0476a4de66ff35001bf61b3c598e8f920,PodSandboxId:e79f275cedf6e3838ebee4b03ce1251524f325943d4bb0b1f5c5594a98e223e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721608591113724188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f96b52a-7290-47ff-9c72-ee7a6308aeea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cac11c4ec624cf7ca50e0b67996a5021c043625f6146446895f85408aaf955,PodSandboxId:013a377e10ec62c41059c61e2fe6cc4e9d37ee8cd42ffcc312d6268f13673b46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721608586302559797,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896328ba122dbf462ab053163249509b,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3470abef8b59b4ca880e10cbbbd9247385c0c7cbb1d38e4b4637f0f08d788d62,PodSandboxId:e1efd9bf361df18991f8be824a9c45368109afb8a7ca5826bee089170a4ba76e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721608586264196879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5896fd0f5347aa8d3d434964b50799,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73cade1bd9680e9fbff7408c2e4f0481320811d530355f42e4f377657f10fe1f,PodSandboxId:a6b2447bb94e0cf7803300ee359bb4c4b94bdcd5956122da29538e0b6aed2200,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721608586270479557,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a605f1fe22a9ee7e6ac5952a9f6b2fb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830ca7c3c3ffe76035545341707a460c981f4efa2712d948e1e97d2510a43dbf,PodSandboxId:0891dcc1d6eab921910f12ca2282104d27ca76c981d6b39ef9dd29f825e89d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721608586247293383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48527c4b511e5eadb25a462cd2c92623,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdf7764da0eb69d0fe207942776e6ee85fede1974ca43c4bfde102722d1abf5,PodSandboxId:a8b822d0f635672adaa29def9fa185f724522ec454f49a90fe316675763eea2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721608584121111937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-l47zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec35cf3-86e2-4614-8230-26da5e722544,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c080c6a82080b8183683d573c95404ac6383286267411860dd030dd4fa2b3001,PodSandboxId:149001e184d148263ec7f636586225b57f02ce9df8e130f8c774ebd727665eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721608576777247462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gnnsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb779676-2cd1-4112-b2fb-5fb21fe70e55,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa815bfb10712c4019ff6ce616ad0dca988246479b73bf43bb491a3ffa0b74cf,PodSandboxId:e79f275cedf6e3838ebee4b03ce1251524f325943d4bb0b1f5c5594a98e223e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721608562619683548,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f96b52a-7290-47ff-9c72-ee7a6308aeea,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a3a269a2ec438aff85168e8a61a662df00e44d2bd782d8bdfa715a9ac48778,PodSandboxId:149001e184d148263ec7f636586225b57f02ce9df8e130f8c774ebd727665eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721608563380215101,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gnnsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb779676-2cd1-4112-b2fb-5fb21fe70e55,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb0dd2e6d8d5ac79b792bcf34ce39752aa0acd7de925fe3d4c42f98a94cf725,PodSandboxId:a8b822d0f635672adaa29def9fa185f724522ec454f49a90fe316675763eea2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721608563246154176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-l47zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec35cf3-86e2-4614-8230-26da5e722544,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f94fc6f55e64384984d95aa31ad5d60dae783efc9b88fb5da6e7019df65527c,PodSandboxId:0891dcc1d6eab921910f12ca2282104d27ca76c981d6b39ef9dd29f825e8
9d19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721608562563672214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48527c4b511e5eadb25a462cd2c92623,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c24efa263783d4fdfb97e3364e5e9fed367f2fa7badf15b3d51367df38c476dc,PodSandboxId:a6b2447bb94e0cf7803300ee359bb4c4b94bdcd
5956122da29538e0b6aed2200,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721608562500511509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a605f1fe22a9ee7e6ac5952a9f6b2fb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6283be721f56c4e202ba9e790b5faff1c105e86b055741b7a5a952e602fcd0,PodSandboxId:e1efd9bf361df18991f8be824a9c45368109afb8a7ca5826bee089170a4ba76e,Metadata:&C
ontainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721608562465505585,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5896fd0f5347aa8d3d434964b50799,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e819aa4298d2aee930dca5c03ec1cf7b471c5c34f65f92d79cce4047d962362,PodSandboxId:013a377e10ec62c41059c61e2fe6cc4e9d37ee8cd42ffcc312d6268f13673b46,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721608562392475763,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-921436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896328ba122dbf462ab053163249509b,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0531c3109410aba51a6b218ac6ff9e9a8cbd646366d5f6968c0601b15aff454,PodSandboxId:5feba58a43acbc5dc1b29102d7f6a3c69946e874d75cfecdb9aab8b8964da160,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721608562298558322,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7jks8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4bbc4-8af7-4015-bb7b-0241af992fce,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3708cc43-4508-4e04-8807-5fa52ee831b7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fa312fe159448       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   e79f275cedf6e       storage-provisioner
	862447162e746       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   4 seconds ago       Running             kube-proxy                2                   5feba58a43acb       kube-proxy-7jks8
	a6cac11c4ec62       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 seconds ago       Running             kube-apiserver            2                   013a377e10ec6       kube-apiserver-kubernetes-upgrade-921436
	73cade1bd9680       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 seconds ago       Running             etcd                      2                   a6b2447bb94e0       etcd-kubernetes-upgrade-921436
	3470abef8b59b       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 seconds ago       Running             kube-scheduler            2                   e1efd9bf361df       kube-scheduler-kubernetes-upgrade-921436
	830ca7c3c3ffe       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 seconds ago       Running             kube-controller-manager   2                   0891dcc1d6eab       kube-controller-manager-kubernetes-upgrade-921436
	2cdf7764da0eb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   11 seconds ago      Running             coredns                   2                   a8b822d0f6356       coredns-5cfdc65f69-l47zs
	c080c6a82080b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   2                   149001e184d14       coredns-5cfdc65f69-gnnsx
	05a3a269a2ec4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   31 seconds ago      Exited              coredns                   1                   149001e184d14       coredns-5cfdc65f69-gnnsx
	cdb0dd2e6d8d5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   32 seconds ago      Exited              coredns                   1                   a8b822d0f6356       coredns-5cfdc65f69-l47zs
	fa815bfb10712       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   32 seconds ago      Exited              storage-provisioner       1                   e79f275cedf6e       storage-provisioner
	3f94fc6f55e64       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   32 seconds ago      Exited              kube-controller-manager   1                   0891dcc1d6eab       kube-controller-manager-kubernetes-upgrade-921436
	c24efa263783d       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   32 seconds ago      Exited              etcd                      1                   a6b2447bb94e0       etcd-kubernetes-upgrade-921436
	ca6283be721f5       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   32 seconds ago      Exited              kube-scheduler            1                   e1efd9bf361df       kube-scheduler-kubernetes-upgrade-921436
	7e819aa4298d2       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   32 seconds ago      Exited              kube-apiserver            1                   013a377e10ec6       kube-apiserver-kubernetes-upgrade-921436
	d0531c3109410       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   33 seconds ago      Exited              kube-proxy                1                   5feba58a43acb       kube-proxy-7jks8
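
	Both the "container status" table above and the raw ListContainers traces in the CRI-O log are views of the same /runtime.v1.RuntimeService/ListContainers call. For reference, the listing can be reproduced directly against the runtime socket; the following is a minimal sketch, not part of the test harness, and the CRI-O socket path /var/run/crio/crio.sock is an assumed default:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O runtime socket (assumed default path).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimev1.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty request mirrors the "No filters were applied, returning
		// full container list" responses logged above.
		resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\tattempt=%d\n",
				c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
		}
	}

	With no filter set, the output corresponds row-for-row to the table above, including the exited attempt-1 containers left over from the first restart.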
	
	
	==> coredns [05a3a269a2ec438aff85168e8a61a662df00e44d2bd782d8bdfa715a9ac48778] <==
	
	
	==> coredns [2cdf7764da0eb69d0fe207942776e6ee85fede1974ca43c4bfde102722d1abf5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c080c6a82080b8183683d573c95404ac6383286267411860dd030dd4fa2b3001] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cdb0dd2e6d8d5ac79b792bcf34ce39752aa0acd7de925fe3d4c42f98a94cf725] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
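
	The "connection refused" errors against https://10.96.0.1:443 and the repeated "waiting for Kubernetes API" lines show the CoreDNS kubernetes plugin polling the apiserver service VIP while the control plane restarts. The same reachability check can be made with a plain TCP dial; a minimal diagnostic sketch, illustrative only, with the address copied from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the in-cluster kubernetes service VIP from the
		// CoreDNS reflector errors above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
		if err != nil {
			// "connect: connection refused" here matches the log entries.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable")
	}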
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-921436
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-921436
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:35:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-921436
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:36:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:36:30 +0000   Mon, 22 Jul 2024 00:35:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:36:30 +0000   Mon, 22 Jul 2024 00:35:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:36:30 +0000   Mon, 22 Jul 2024 00:35:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:36:30 +0000   Mon, 22 Jul 2024 00:35:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    kubernetes-upgrade-921436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 602fba1941ba468abfb06e91c6d86724
	  System UUID:                602fba19-41ba-468a-bfb0-6e91c6d86724
	  Boot ID:                    76757aa2-ed55-4361-9605-10dbb17345d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-gnnsx                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 coredns-5cfdc65f69-l47zs                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 etcd-kubernetes-upgrade-921436                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	  kube-system                 kube-apiserver-kubernetes-upgrade-921436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-921436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-proxy-7jks8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-kubernetes-upgrade-921436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 60s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    74s (x8 over 76s)  kubelet          Node kubernetes-upgrade-921436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x7 over 76s)  kubelet          Node kubernetes-upgrade-921436 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  74s (x8 over 76s)  kubelet          Node kubernetes-upgrade-921436 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           63s                node-controller  Node kubernetes-upgrade-921436 event: Registered Node kubernetes-upgrade-921436 in Controller
	  Normal  RegisteredNode           25s                node-controller  Node kubernetes-upgrade-921436 event: Registered Node kubernetes-upgrade-921436 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-921436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-921436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet          Node kubernetes-upgrade-921436 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-921436 event: Registered Node kubernetes-upgrade-921436 in Controller
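
	The node report above is kubectl describe output; the Ready condition returning to True after each restart is what the upgrade test waits on. Reading the same conditions programmatically with client-go looks roughly like the sketch below (illustrative; the kubeconfig location is an assumption):

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig location is an assumption; minikube writes one per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"kubernetes-upgrade-921436", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Prints the same Type/Status/Reason triples shown in the
		// Conditions block of the describe output above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}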
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.049474] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.071720] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.086870] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.209650] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.133341] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.251362] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +3.956206] systemd-fstab-generator[729]: Ignoring "noauto" option for root device
	[  +1.952306] systemd-fstab-generator[849]: Ignoring "noauto" option for root device
	[  +0.066281] kauditd_printk_skb: 158 callbacks suppressed
	[  +9.516073] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.093397] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.006724] kauditd_printk_skb: 29 callbacks suppressed
	[ +25.617005] systemd-fstab-generator[2204]: Ignoring "noauto" option for root device
	[  +0.087256] kauditd_printk_skb: 66 callbacks suppressed
	[  +0.063733] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.171563] systemd-fstab-generator[2230]: Ignoring "noauto" option for root device
	[  +0.143515] systemd-fstab-generator[2242]: Ignoring "noauto" option for root device
	[Jul22 00:36] systemd-fstab-generator[2270]: Ignoring "noauto" option for root device
	[  +1.059703] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +5.096905] kauditd_printk_skb: 229 callbacks suppressed
	[ +18.863424] systemd-fstab-generator[3680]: Ignoring "noauto" option for root device
	[  +0.090752] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.559472] kauditd_printk_skb: 37 callbacks suppressed
	[  +1.259890] systemd-fstab-generator[4118]: Ignoring "noauto" option for root device
	
	
	==> etcd [73cade1bd9680e9fbff7408c2e4f0481320811d530355f42e4f377657f10fe1f] <==
	{"level":"info","ts":"2024-07-22T00:36:28.193166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgVoteResp from a71e7bac075997 at term 4"}
	{"level":"info","ts":"2024-07-22T00:36:28.193177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became leader at term 4"}
	{"level":"info","ts":"2024-07-22T00:36:28.193184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a71e7bac075997 elected leader a71e7bac075997 at term 4"}
	{"level":"info","ts":"2024-07-22T00:36:28.592018Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a71e7bac075997","local-member-attributes":"{Name:kubernetes-upgrade-921436 ClientURLs:[https://192.168.39.95:2379]}","request-path":"/0/members/a71e7bac075997/attributes","cluster-id":"986e33f48d4d13ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:36:28.592089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:36:28.592467Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:36:28.592563Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:36:28.592608Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:36:28.593625Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T00:36:28.593666Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T00:36:28.59504Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T00:36:28.597441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.95:2379"}
	{"level":"info","ts":"2024-07-22T00:36:33.812622Z","caller":"traceutil/trace.go:171","msg":"trace[2011516785] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"151.099726ms","start":"2024-07-22T00:36:33.661206Z","end":"2024-07-22T00:36:33.812306Z","steps":["trace[2011516785] 'process raft request'  (duration: 150.982861ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:36:33.871785Z","caller":"traceutil/trace.go:171","msg":"trace[118085422] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"208.947834ms","start":"2024-07-22T00:36:33.662823Z","end":"2024-07-22T00:36:33.871771Z","steps":["trace[118085422] 'process raft request'  (duration: 207.947914ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:36:34.04447Z","caller":"traceutil/trace.go:171","msg":"trace[1719176639] transaction","detail":"{read_only:false; response_revision:545; number_of_response:1; }","duration":"149.187528ms","start":"2024-07-22T00:36:33.895266Z","end":"2024-07-22T00:36:34.044453Z","steps":["trace[1719176639] 'process raft request'  (duration: 148.544962ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:36:34.344689Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.460267ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6455787847680352773 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:539 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3970 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-22T00:36:34.344888Z","caller":"traceutil/trace.go:171","msg":"trace[805558403] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"215.716738ms","start":"2024-07-22T00:36:34.129159Z","end":"2024-07-22T00:36:34.344876Z","steps":["trace[805558403] 'process raft request'  (duration: 215.661111ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:36:34.345113Z","caller":"traceutil/trace.go:171","msg":"trace[1517055294] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"447.347533ms","start":"2024-07-22T00:36:33.897751Z","end":"2024-07-22T00:36:34.345099Z","steps":["trace[1517055294] 'process raft request'  (duration: 309.992009ms)","trace[1517055294] 'compare'  (duration: 135.853617ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:36:34.345197Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T00:36:33.897736Z","time spent":"447.419154ms","remote":"127.0.0.1:50930","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4019,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:539 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3970 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-07-22T00:36:34.345347Z","caller":"traceutil/trace.go:171","msg":"trace[1483037055] linearizableReadLoop","detail":"{readStateIndex:582; appliedIndex:580; }","duration":"362.578957ms","start":"2024-07-22T00:36:33.982758Z","end":"2024-07-22T00:36:34.345337Z","steps":["trace[1483037055] 'read index received'  (duration: 61.010478ms)","trace[1483037055] 'applied index is now lower than readState.Index'  (duration: 301.56757ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:36:34.345551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.7755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:4186"}
	{"level":"info","ts":"2024-07-22T00:36:34.345622Z","caller":"traceutil/trace.go:171","msg":"trace[224196060] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:547; }","duration":"362.859763ms","start":"2024-07-22T00:36:33.982753Z","end":"2024-07-22T00:36:34.345613Z","steps":["trace[224196060] 'agreement among raft nodes before linearized reading'  (duration: 362.720833ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:36:34.345665Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T00:36:33.98272Z","time spent":"362.938029ms","remote":"127.0.0.1:50654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":4208,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" "}
	{"level":"warn","ts":"2024-07-22T00:36:34.346364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.229129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2024-07-22T00:36:34.346445Z","caller":"traceutil/trace.go:171","msg":"trace[612552823] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:547; }","duration":"221.315003ms","start":"2024-07-22T00:36:34.125121Z","end":"2024-07-22T00:36:34.346436Z","steps":["trace[612552823] 'agreement among raft nodes before linearized reading'  (duration: 220.674616ms)"],"step_count":1}
	
	
	==> etcd [c24efa263783d4fdfb97e3364e5e9fed367f2fa7badf15b3d51367df38c476dc] <==
	{"level":"info","ts":"2024-07-22T00:36:04.963044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:36:04.963096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgPreVoteResp from a71e7bac075997 at term 2"}
	{"level":"info","ts":"2024-07-22T00:36:04.963111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T00:36:04.963123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgVoteResp from a71e7bac075997 at term 3"}
	{"level":"info","ts":"2024-07-22T00:36:04.963132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T00:36:04.963138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a71e7bac075997 elected leader a71e7bac075997 at term 3"}
	{"level":"info","ts":"2024-07-22T00:36:04.965019Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a71e7bac075997","local-member-attributes":"{Name:kubernetes-upgrade-921436 ClientURLs:[https://192.168.39.95:2379]}","request-path":"/0/members/a71e7bac075997/attributes","cluster-id":"986e33f48d4d13ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:36:04.965032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:36:04.965393Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:36:04.965463Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:36:04.965051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:36:04.966181Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T00:36:04.966229Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T00:36:04.967078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.95:2379"}
	{"level":"info","ts":"2024-07-22T00:36:04.96709Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T00:36:13.91262Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-22T00:36:13.912676Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-921436","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"]}
	{"level":"warn","ts":"2024-07-22T00:36:13.91275Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:36:13.912842Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:36:13.933527Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T00:36:13.933607Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T00:36:13.933683Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a71e7bac075997","current-leader-member-id":"a71e7bac075997"}
	{"level":"info","ts":"2024-07-22T00:36:13.936753Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-07-22T00:36:13.936847Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-07-22T00:36:13.936866Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-921436","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"]}
	
	
	==> kernel <==
	 00:36:35 up 1 min,  0 users,  load average: 1.46, 0.40, 0.14
	Linux kubernetes-upgrade-921436 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7e819aa4298d2aee930dca5c03ec1cf7b471c5c34f65f92d79cce4047d962362] <==
	W0722 00:36:23.040623       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.052455       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.066385       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.138592       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.161700       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.241073       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.268265       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.274693       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.338044       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.370537       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.386790       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.464304       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.486710       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.488436       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.543562       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.554537       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.558198       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.572108       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.611090       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.639794       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.699247       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.764867       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.884349       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:23.906735       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:36:24.049433       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a6cac11c4ec624cf7ca50e0b67996a5021c043625f6146446895f85408aaf955] <==
	I0722 00:36:30.064165       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 00:36:30.064241       1 policy_source.go:224] refreshing policies
	I0722 00:36:30.068196       1 shared_informer.go:320] Caches are synced for configmaps
	I0722 00:36:30.069063       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0722 00:36:30.069123       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0722 00:36:30.069257       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 00:36:30.069663       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 00:36:30.078788       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0722 00:36:30.087414       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 00:36:30.099563       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 00:36:30.099748       1 aggregator.go:171] initial CRD sync complete...
	I0722 00:36:30.099768       1 autoregister_controller.go:144] Starting autoregister controller
	I0722 00:36:30.099774       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 00:36:30.099779       1 cache.go:39] Caches are synced for autoregister controller
	E0722 00:36:30.101625       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 00:36:30.135007       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 00:36:30.155150       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 00:36:30.884036       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 00:36:32.145274       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 00:36:32.171529       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 00:36:32.214546       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 00:36:32.311350       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 00:36:32.329536       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 00:36:33.659944       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 00:36:34.363893       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3f94fc6f55e64384984d95aa31ad5d60dae783efc9b88fb5da6e7019df65527c] <==
	I0722 00:36:10.673819       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0722 00:36:10.673842       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0722 00:36:10.673864       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0722 00:36:10.673938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-921436"
	I0722 00:36:10.678862       1 shared_informer.go:320] Caches are synced for ephemeral
	I0722 00:36:10.692896       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0722 00:36:10.693010       1 shared_informer.go:320] Caches are synced for daemon sets
	I0722 00:36:10.693097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="137.097µs"
	I0722 00:36:10.693139       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0722 00:36:10.693188       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 00:36:10.693205       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 00:36:10.693264       1 shared_informer.go:320] Caches are synced for persistent volume
	I0722 00:36:10.693371       1 shared_informer.go:320] Caches are synced for disruption
	I0722 00:36:10.702922       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 00:36:10.703002       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 00:36:10.704094       1 shared_informer.go:320] Caches are synced for service account
	I0722 00:36:10.706278       1 shared_informer.go:320] Caches are synced for stateful set
	I0722 00:36:10.710585       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0722 00:36:10.712942       1 shared_informer.go:320] Caches are synced for GC
	I0722 00:36:10.714145       1 shared_informer.go:320] Caches are synced for job
	I0722 00:36:10.722740       1 shared_informer.go:320] Caches are synced for PVC protection
	I0722 00:36:10.725019       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0722 00:36:10.725110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-921436"
	I0722 00:36:10.739086       1 shared_informer.go:320] Caches are synced for namespace
	I0722 00:36:10.749233       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [830ca7c3c3ffe76035545341707a460c981f4efa2712d948e1e97d2510a43dbf] <==
	I0722 00:36:34.126775       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0722 00:36:34.130580       1 shared_informer.go:320] Caches are synced for TTL
	I0722 00:36:34.135030       1 shared_informer.go:320] Caches are synced for attach detach
	I0722 00:36:34.155773       1 shared_informer.go:320] Caches are synced for persistent volume
	I0722 00:36:34.171423       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0722 00:36:34.171661       1 shared_informer.go:320] Caches are synced for daemon sets
	I0722 00:36:34.181075       1 shared_informer.go:320] Caches are synced for node
	I0722 00:36:34.181178       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0722 00:36:34.181218       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0722 00:36:34.181241       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0722 00:36:34.181263       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0722 00:36:34.181359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-921436"
	I0722 00:36:34.250340       1 shared_informer.go:320] Caches are synced for namespace
	I0722 00:36:34.271055       1 shared_informer.go:320] Caches are synced for service account
	I0722 00:36:34.284730       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 00:36:34.295132       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 00:36:34.304589       1 shared_informer.go:320] Caches are synced for crt configmap
	I0722 00:36:34.311009       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 00:36:34.322246       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0722 00:36:34.326577       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 00:36:34.326620       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 00:36:35.030581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="15.746341ms"
	I0722 00:36:35.030866       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="49.875µs"
	I0722 00:36:35.071452       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="25.832469ms"
	I0722 00:36:35.072195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="44.685µs"
	
	
	==> kube-proxy [862447162e746017ba1bc7bff22c09095694c7ef2abb969be01ba3464d372675] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0722 00:36:31.390759       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0722 00:36:31.405769       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	E0722 00:36:31.405873       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0722 00:36:31.443621       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0722 00:36:31.443691       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:36:31.443727       1 server_linux.go:170] "Using iptables Proxier"
	I0722 00:36:31.446261       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0722 00:36:31.446655       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0722 00:36:31.446685       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:36:31.449193       1 config.go:197] "Starting service config controller"
	I0722 00:36:31.449225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:36:31.449274       1 config.go:104] "Starting endpoint slice config controller"
	I0722 00:36:31.449297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:36:31.450668       1 config.go:326] "Starting node config controller"
	I0722 00:36:31.450722       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:36:31.550173       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:36:31.550445       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:36:31.552678       1 shared_informer.go:320] Caches are synced for node config
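	(Note: the "Error cleaning up nftables rules ... Operation not supported" lines above indicate the guest kernel lacks nf_tables support, so kube-proxy falls back to the iptables proxier — hence "No iptables support for family IPv6" plus "Using iptables Proxier". A minimal sketch of the same probe, assuming the nft binary is present in the guest, feeding a script over /dev/stdin the way kube-proxy's cleanup path does:)

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Pipe the same rule kube-proxy's cleanup attempts ("add table ip
		// kube-proxy") into nft via stdin; on a kernel built without
		// nf_tables this fails with "Operation not supported", matching
		// the error rendered in the log above.
		cmd := exec.Command("nft", "-f", "/dev/stdin")
		cmd.Stdin = bytes.NewBufferString("add table ip kube-proxy\n")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}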
	
	
	==> kube-proxy [d0531c3109410aba51a6b218ac6ff9e9a8cbd646366d5f6968c0601b15aff454] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0722 00:36:04.234176       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0722 00:36:06.296423       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	E0722 00:36:06.296564       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0722 00:36:06.398139       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0722 00:36:06.400037       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:36:06.400456       1 server_linux.go:170] "Using iptables Proxier"
	I0722 00:36:06.405319       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0722 00:36:06.405631       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0722 00:36:06.405703       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:36:06.407548       1 config.go:326] "Starting node config controller"
	I0722 00:36:06.407628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:36:06.408002       1 config.go:197] "Starting service config controller"
	I0722 00:36:06.408022       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:36:06.408065       1 config.go:104] "Starting endpoint slice config controller"
	I0722 00:36:06.408081       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:36:06.508250       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:36:06.508828       1 shared_informer.go:320] Caches are synced for node config
	I0722 00:36:06.508858       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3470abef8b59b4ca880e10cbbbd9247385c0c7cbb1d38e4b4637f0f08d788d62] <==
	W0722 00:36:30.047866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:36:30.050932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.047916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 00:36:30.051027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:36:30.051084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048103       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 00:36:30.051147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 00:36:30.051205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:36:30.051263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:36:30.051324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 00:36:30.051380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:36:30.051436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:36:30.051493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:36:30.051549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:36:30.048523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 00:36:30.051610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0722 00:36:31.371203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ca6283be721f56c4e202ba9e790b5faff1c105e86b055741b7a5a952e602fcd0] <==
	I0722 00:36:04.264345       1 serving.go:386] Generated self-signed cert in-memory
	W0722 00:36:06.235228       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 00:36:06.235407       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:36:06.235442       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 00:36:06.235505       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 00:36:06.286531       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0722 00:36:06.286570       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:36:06.289566       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 00:36:06.289605       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:36:06.292096       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 00:36:06.293500       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0722 00:36:06.390316       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:36:14.051486       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0722 00:36:14.051546       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0722 00:36:14.051658       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.043344    3686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/896328ba122dbf462ab053163249509b-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-921436\" (UID: \"896328ba122dbf462ab053163249509b\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-921436"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.043360    3686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48527c4b511e5eadb25a462cd2c92623-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-921436\" (UID: \"48527c4b511e5eadb25a462cd2c92623\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-921436"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.043375    3686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48527c4b511e5eadb25a462cd2c92623-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-921436\" (UID: \"48527c4b511e5eadb25a462cd2c92623\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-921436"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.043396    3686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb5896fd0f5347aa8d3d434964b50799-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-921436\" (UID: \"cb5896fd0f5347aa8d3d434964b50799\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-921436"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.104318    3686 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-921436"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: E0722 00:36:26.105306    3686 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.95:8443: connect: connection refused" node="kubernetes-upgrade-921436"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.233400    3686 scope.go:117] "RemoveContainer" containerID="ca6283be721f56c4e202ba9e790b5faff1c105e86b055741b7a5a952e602fcd0"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.233644    3686 scope.go:117] "RemoveContainer" containerID="3f94fc6f55e64384984d95aa31ad5d60dae783efc9b88fb5da6e7019df65527c"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.236017    3686 scope.go:117] "RemoveContainer" containerID="c24efa263783d4fdfb97e3364e5e9fed367f2fa7badf15b3d51367df38c476dc"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.237056    3686 scope.go:117] "RemoveContainer" containerID="7e819aa4298d2aee930dca5c03ec1cf7b471c5c34f65f92d79cce4047d962362"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: E0722 00:36:26.406567    3686 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-921436?timeout=10s\": dial tcp 192.168.39.95:8443: connect: connection refused" interval="800ms"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:26.507591    3686 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-921436"
	Jul 22 00:36:26 kubernetes-upgrade-921436 kubelet[3686]: E0722 00:36:26.508466    3686 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.95:8443: connect: connection refused" node="kubernetes-upgrade-921436"
	Jul 22 00:36:27 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:27.310349    3686 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-921436"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.149166    3686 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-921436"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.149674    3686 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-921436"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.149780    3686 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.156043    3686 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.782346    3686 apiserver.go:52] "Watching apiserver"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.806602    3686 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.850373    3686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96d4bbc4-8af7-4015-bb7b-0241af992fce-xtables-lock\") pod \"kube-proxy-7jks8\" (UID: \"96d4bbc4-8af7-4015-bb7b-0241af992fce\") " pod="kube-system/kube-proxy-7jks8"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.852691    3686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96d4bbc4-8af7-4015-bb7b-0241af992fce-lib-modules\") pod \"kube-proxy-7jks8\" (UID: \"96d4bbc4-8af7-4015-bb7b-0241af992fce\") " pod="kube-system/kube-proxy-7jks8"
	Jul 22 00:36:30 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:30.853750    3686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9f96b52a-7290-47ff-9c72-ee7a6308aeea-tmp\") pod \"storage-provisioner\" (UID: \"9f96b52a-7290-47ff-9c72-ee7a6308aeea\") " pod="kube-system/storage-provisioner"
	Jul 22 00:36:31 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:31.092444    3686 scope.go:117] "RemoveContainer" containerID="d0531c3109410aba51a6b218ac6ff9e9a8cbd646366d5f6968c0601b15aff454"
	Jul 22 00:36:31 kubernetes-upgrade-921436 kubelet[3686]: I0722 00:36:31.094287    3686 scope.go:117] "RemoveContainer" containerID="fa815bfb10712c4019ff6ce616ad0dca988246479b73bf43bb491a3ffa0b74cf"
	
	
	==> storage-provisioner [fa312fe1594487141a028e46338e19a0476a4de66ff35001bf61b3c598e8f920] <==
	I0722 00:36:31.246898       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:36:31.272575       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:36:31.272664       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
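	(Note: this replacement provisioner blocks on the kube-system/k8s.io-minikube-hostpath lock until the previous instance — whose "stopped leading" event appears in the next block — gives it up. A minimal sketch of that pattern using client-go's current Lease-based lock; the provisioner here is built against client-go v0.20.5 and actually uses an older endpoints-based lock, and the identity string below is an illustrative assumption:)

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same lock name and namespace as the log line above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "probe-1"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; provisioning starts here")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; mirrors the 'stopped leading' event below")
				},
			},
		})
	}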
	
	
	==> storage-provisioner [fa815bfb10712c4019ff6ce616ad0dca988246479b73bf43bb491a3ffa0b74cf] <==
	goroutine 84 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000430b50, 0x3)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000430b40)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00043b560, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00048e500, 0x18e5530, 0xc000138cc0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001fbc20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001fbc20, 0x18b3d60, 0xc00038c8d0, 0x1, 0xc00014a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001fbc20, 0x3b9aca00, 0x0, 0x1, 0xc00014a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001fbc20, 0x3b9aca00, 0xc00014a1e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	I0722 00:36:25.135233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-921436_edb17d9f-26c3-478f-b52c-b73bb8afa9bc stopped leading
	E0722 00:36:25.135806       1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:".17e461e2ff2ca4f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"LeaderElection", Message:"kubernetes-upgrade-921436_edb17d9f-26c3-478f-b52c-b73bb8afa9bc stopped leading", Source:v1.EventSource{Component:"k8s.io/minikube-hostpath_kubernetes-upgrade-921436_edb17d9f-26c3-478f-b52c-b73bb8afa9bc", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc19f884247fc8af9, ext:21207436410, loc:(*time.Location)(0x220dce0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc19f884247fc8af9, ext:21207436410, loc:(*time.Location)(0x220dce0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.96.0.1:443/api/v1/namespaces/default/events": dial tcp 10.96.0.1:443: connect: connection refused'(may retry after sleeping)
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:36:34.803401   55149 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-5094/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
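(Note: the stderr above shows minikube's log reader choking on a line in lastStart.txt that exceeds bufio.Scanner's default 64 KiB token limit, which is what produces "bufio.Scanner: token too long". A minimal sketch of reading such a file with an enlarged buffer, assuming a hypothetical local copy of the file:)

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical local copy
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default cap is bufio.MaxScanTokenSize (64 KiB); any longer line
		// makes Scan stop with bufio.ErrTooLong. Grow the buffer so
		// oversized log lines scan cleanly.
		sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}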
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-921436 -n kubernetes-upgrade-921436
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-921436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-921436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-921436
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-921436: (1.107225815s)
--- FAIL: TestKubernetesUpgrade (381.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (287.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-366657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-366657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m46.94692258s)

-- stdout --
	* [old-k8s-version-366657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-366657" primary control-plane node in "old-k8s-version-366657" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0722 00:40:14.864197   63788 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:40:14.864513   63788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:40:14.864525   63788 out.go:304] Setting ErrFile to fd 2...
	I0722 00:40:14.864531   63788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:40:14.864725   63788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:40:14.865293   63788 out.go:298] Setting JSON to false
	I0722 00:40:14.866376   63788 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4959,"bootTime":1721603856,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:40:14.866436   63788 start.go:139] virtualization: kvm guest
	I0722 00:40:14.868653   63788 out.go:177] * [old-k8s-version-366657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:40:14.870295   63788 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:40:14.870292   63788 notify.go:220] Checking for updates...
	I0722 00:40:14.872779   63788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:40:14.873887   63788 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:40:14.875037   63788 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:40:14.876222   63788 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:40:14.877484   63788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:40:14.879238   63788 config.go:182] Loaded profile config "bridge-280040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:40:14.879333   63788 config.go:182] Loaded profile config "enable-default-cni-280040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:40:14.879409   63788 config.go:182] Loaded profile config "flannel-280040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:40:14.879487   63788 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:40:14.921163   63788 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 00:40:14.922456   63788 start.go:297] selected driver: kvm2
	I0722 00:40:14.922474   63788 start.go:901] validating driver "kvm2" against <nil>
	I0722 00:40:14.922488   63788 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:40:14.923481   63788 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:40:14.923579   63788 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:40:14.938987   63788 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:40:14.939032   63788 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:40:14.939398   63788 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:40:14.939434   63788 cni.go:84] Creating CNI manager for ""
	I0722 00:40:14.939444   63788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:40:14.939453   63788 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 00:40:14.939561   63788 start.go:340] cluster config:
	{Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:40:14.939712   63788 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:40:14.942346   63788 out.go:177] * Starting "old-k8s-version-366657" primary control-plane node in "old-k8s-version-366657" cluster
	I0722 00:40:14.943554   63788 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:40:14.943594   63788 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0722 00:40:14.943611   63788 cache.go:56] Caching tarball of preloaded images
	I0722 00:40:14.943685   63788 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:40:14.943698   63788 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0722 00:40:14.943795   63788 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:40:14.943818   63788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json: {Name:mk16f7a42e5061d5b8c8cdf0caa84ebda221b2bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:40:14.943986   63788 start.go:360] acquireMachinesLock for old-k8s-version-366657: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:40:36.411036   63788 start.go:364] duration metric: took 21.467012118s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:40:36.411098   63788 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:40:36.411205   63788 start.go:125] createHost starting for "" (driver="kvm2")
	I0722 00:40:36.413133   63788 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 00:40:36.413357   63788 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:40:36.413403   63788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:40:36.430790   63788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35913
	I0722 00:40:36.431218   63788 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:40:36.431791   63788 main.go:141] libmachine: Using API Version  1
	I0722 00:40:36.431812   63788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:40:36.432192   63788 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:40:36.432402   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:40:36.432616   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:40:36.432910   63788 start.go:159] libmachine.API.Create for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:40:36.432964   63788 client.go:168] LocalClient.Create starting
	I0722 00:40:36.433012   63788 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem
	I0722 00:40:36.433067   63788 main.go:141] libmachine: Decoding PEM data...
	I0722 00:40:36.433086   63788 main.go:141] libmachine: Parsing certificate...
	I0722 00:40:36.433152   63788 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem
	I0722 00:40:36.433182   63788 main.go:141] libmachine: Decoding PEM data...
	I0722 00:40:36.433207   63788 main.go:141] libmachine: Parsing certificate...
	I0722 00:40:36.433233   63788 main.go:141] libmachine: Running pre-create checks...
	I0722 00:40:36.433261   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .PreCreateCheck
	I0722 00:40:36.433643   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:40:36.434082   63788 main.go:141] libmachine: Creating machine...
	I0722 00:40:36.434096   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .Create
	I0722 00:40:36.434220   63788 main.go:141] libmachine: (old-k8s-version-366657) Creating KVM machine...
	I0722 00:40:36.435661   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found existing default KVM network
	I0722 00:40:36.436916   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:36.436757   65055 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015720}
	I0722 00:40:36.436942   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | created network xml: 
	I0722 00:40:36.436957   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | <network>
	I0722 00:40:36.436971   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |   <name>mk-old-k8s-version-366657</name>
	I0722 00:40:36.436980   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |   <dns enable='no'/>
	I0722 00:40:36.436987   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |   
	I0722 00:40:36.436997   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0722 00:40:36.437008   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |     <dhcp>
	I0722 00:40:36.437018   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0722 00:40:36.437031   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |     </dhcp>
	I0722 00:40:36.437040   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |   </ip>
	I0722 00:40:36.437052   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG |   
	I0722 00:40:36.437074   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | </network>
	I0722 00:40:36.437083   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | 
	I0722 00:40:36.442711   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | trying to create private KVM network mk-old-k8s-version-366657 192.168.39.0/24...
	I0722 00:40:36.512910   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | private KVM network mk-old-k8s-version-366657 192.168.39.0/24 created
	I0722 00:40:36.512962   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:36.512881   65055 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:40:36.512987   63788 main.go:141] libmachine: (old-k8s-version-366657) Setting up store path in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657 ...
	I0722 00:40:36.513015   63788 main.go:141] libmachine: (old-k8s-version-366657) Building disk image from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 00:40:36.513034   63788 main.go:141] libmachine: (old-k8s-version-366657) Downloading /home/jenkins/minikube-integration/19312-5094/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:40:36.744942   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:36.744803   65055 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa...
	I0722 00:40:36.862292   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:36.862149   65055 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/old-k8s-version-366657.rawdisk...
	I0722 00:40:36.862326   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Writing magic tar header
	I0722 00:40:36.862341   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Writing SSH key tar header
	I0722 00:40:36.862354   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:36.862315   65055 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657 ...
	I0722 00:40:36.862513   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657
	I0722 00:40:36.862548   63788 main.go:141] libmachine: (old-k8s-version-366657) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657 (perms=drwx------)
	I0722 00:40:36.862568   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube/machines
	I0722 00:40:36.862583   63788 main.go:141] libmachine: (old-k8s-version-366657) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube/machines (perms=drwxr-xr-x)
	I0722 00:40:36.862595   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:40:36.862621   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-5094
	I0722 00:40:36.862636   63788 main.go:141] libmachine: (old-k8s-version-366657) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094/.minikube (perms=drwxr-xr-x)
	I0722 00:40:36.862649   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0722 00:40:36.862664   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Checking permissions on dir: /home/jenkins
	I0722 00:40:36.862675   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Checking permissions on dir: /home
	I0722 00:40:36.862688   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Skipping /home - not owner
	I0722 00:40:36.862714   63788 main.go:141] libmachine: (old-k8s-version-366657) Setting executable bit set on /home/jenkins/minikube-integration/19312-5094 (perms=drwxrwxr-x)
	I0722 00:40:36.862744   63788 main.go:141] libmachine: (old-k8s-version-366657) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0722 00:40:36.862777   63788 main.go:141] libmachine: (old-k8s-version-366657) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0722 00:40:36.862797   63788 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:40:36.863768   63788 main.go:141] libmachine: (old-k8s-version-366657) define libvirt domain using xml: 
	I0722 00:40:36.863784   63788 main.go:141] libmachine: (old-k8s-version-366657) <domain type='kvm'>
	I0722 00:40:36.863790   63788 main.go:141] libmachine: (old-k8s-version-366657)   <name>old-k8s-version-366657</name>
	I0722 00:40:36.863798   63788 main.go:141] libmachine: (old-k8s-version-366657)   <memory unit='MiB'>2200</memory>
	I0722 00:40:36.863804   63788 main.go:141] libmachine: (old-k8s-version-366657)   <vcpu>2</vcpu>
	I0722 00:40:36.863811   63788 main.go:141] libmachine: (old-k8s-version-366657)   <features>
	I0722 00:40:36.863817   63788 main.go:141] libmachine: (old-k8s-version-366657)     <acpi/>
	I0722 00:40:36.863823   63788 main.go:141] libmachine: (old-k8s-version-366657)     <apic/>
	I0722 00:40:36.863828   63788 main.go:141] libmachine: (old-k8s-version-366657)     <pae/>
	I0722 00:40:36.863842   63788 main.go:141] libmachine: (old-k8s-version-366657)     
	I0722 00:40:36.863850   63788 main.go:141] libmachine: (old-k8s-version-366657)   </features>
	I0722 00:40:36.863855   63788 main.go:141] libmachine: (old-k8s-version-366657)   <cpu mode='host-passthrough'>
	I0722 00:40:36.863860   63788 main.go:141] libmachine: (old-k8s-version-366657)   
	I0722 00:40:36.863864   63788 main.go:141] libmachine: (old-k8s-version-366657)   </cpu>
	I0722 00:40:36.863883   63788 main.go:141] libmachine: (old-k8s-version-366657)   <os>
	I0722 00:40:36.863900   63788 main.go:141] libmachine: (old-k8s-version-366657)     <type>hvm</type>
	I0722 00:40:36.863908   63788 main.go:141] libmachine: (old-k8s-version-366657)     <boot dev='cdrom'/>
	I0722 00:40:36.863920   63788 main.go:141] libmachine: (old-k8s-version-366657)     <boot dev='hd'/>
	I0722 00:40:36.863930   63788 main.go:141] libmachine: (old-k8s-version-366657)     <bootmenu enable='no'/>
	I0722 00:40:36.863942   63788 main.go:141] libmachine: (old-k8s-version-366657)   </os>
	I0722 00:40:36.863953   63788 main.go:141] libmachine: (old-k8s-version-366657)   <devices>
	I0722 00:40:36.863966   63788 main.go:141] libmachine: (old-k8s-version-366657)     <disk type='file' device='cdrom'>
	I0722 00:40:36.863997   63788 main.go:141] libmachine: (old-k8s-version-366657)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/boot2docker.iso'/>
	I0722 00:40:36.864012   63788 main.go:141] libmachine: (old-k8s-version-366657)       <target dev='hdc' bus='scsi'/>
	I0722 00:40:36.864023   63788 main.go:141] libmachine: (old-k8s-version-366657)       <readonly/>
	I0722 00:40:36.864032   63788 main.go:141] libmachine: (old-k8s-version-366657)     </disk>
	I0722 00:40:36.864046   63788 main.go:141] libmachine: (old-k8s-version-366657)     <disk type='file' device='disk'>
	I0722 00:40:36.864060   63788 main.go:141] libmachine: (old-k8s-version-366657)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0722 00:40:36.864076   63788 main.go:141] libmachine: (old-k8s-version-366657)       <source file='/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/old-k8s-version-366657.rawdisk'/>
	I0722 00:40:36.864085   63788 main.go:141] libmachine: (old-k8s-version-366657)       <target dev='hda' bus='virtio'/>
	I0722 00:40:36.864095   63788 main.go:141] libmachine: (old-k8s-version-366657)     </disk>
	I0722 00:40:36.864108   63788 main.go:141] libmachine: (old-k8s-version-366657)     <interface type='network'>
	I0722 00:40:36.864124   63788 main.go:141] libmachine: (old-k8s-version-366657)       <source network='mk-old-k8s-version-366657'/>
	I0722 00:40:36.864136   63788 main.go:141] libmachine: (old-k8s-version-366657)       <model type='virtio'/>
	I0722 00:40:36.864149   63788 main.go:141] libmachine: (old-k8s-version-366657)     </interface>
	I0722 00:40:36.864163   63788 main.go:141] libmachine: (old-k8s-version-366657)     <interface type='network'>
	I0722 00:40:36.864187   63788 main.go:141] libmachine: (old-k8s-version-366657)       <source network='default'/>
	I0722 00:40:36.864207   63788 main.go:141] libmachine: (old-k8s-version-366657)       <model type='virtio'/>
	I0722 00:40:36.864217   63788 main.go:141] libmachine: (old-k8s-version-366657)     </interface>
	I0722 00:40:36.864225   63788 main.go:141] libmachine: (old-k8s-version-366657)     <serial type='pty'>
	I0722 00:40:36.864238   63788 main.go:141] libmachine: (old-k8s-version-366657)       <target port='0'/>
	I0722 00:40:36.864249   63788 main.go:141] libmachine: (old-k8s-version-366657)     </serial>
	I0722 00:40:36.864260   63788 main.go:141] libmachine: (old-k8s-version-366657)     <console type='pty'>
	I0722 00:40:36.864272   63788 main.go:141] libmachine: (old-k8s-version-366657)       <target type='serial' port='0'/>
	I0722 00:40:36.864284   63788 main.go:141] libmachine: (old-k8s-version-366657)     </console>
	I0722 00:40:36.864300   63788 main.go:141] libmachine: (old-k8s-version-366657)     <rng model='virtio'>
	I0722 00:40:36.864320   63788 main.go:141] libmachine: (old-k8s-version-366657)       <backend model='random'>/dev/random</backend>
	I0722 00:40:36.864341   63788 main.go:141] libmachine: (old-k8s-version-366657)     </rng>
	I0722 00:40:36.864355   63788 main.go:141] libmachine: (old-k8s-version-366657)     
	I0722 00:40:36.864365   63788 main.go:141] libmachine: (old-k8s-version-366657)     
	I0722 00:40:36.864379   63788 main.go:141] libmachine: (old-k8s-version-366657)   </devices>
	I0722 00:40:36.864390   63788 main.go:141] libmachine: (old-k8s-version-366657) </domain>
	I0722 00:40:36.864406   63788 main.go:141] libmachine: (old-k8s-version-366657) 
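The XML above is the libvirt domain definition minikube generates through the libvirt API. A minimal sketch of reproducing that step by hand with virsh via os/exec, assuming virsh is installed and the invoking user may talk to the system libvirtd; the temp-file path is arbitrary and the XML body is elided (it is the one printed above):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const domainXML = `<domain type='kvm'>…</domain>` // full XML as printed in the log
	if err := os.WriteFile("/tmp/old-k8s-version-366657.xml", []byte(domainXML), 0o600); err != nil {
		panic(err)
	}
	// Define the persistent domain, then start it, against the system libvirtd.
	for _, args := range [][]string{
		{"-c", "qemu:///system", "define", "/tmp/old-k8s-version-366657.xml"},
		{"-c", "qemu:///system", "start", "old-k8s-version-366657"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s", args, out)
		if err != nil {
			panic(err)
		}
	}
}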
	I0722 00:40:36.868789   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:f3:7a:4d in network default
	I0722 00:40:36.869439   63788 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:40:36.869469   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:36.870150   63788 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:40:36.870416   63788 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:40:36.870926   63788 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:40:36.871620   63788 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:40:38.147356   63788 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:40:38.148343   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:38.148906   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:38.148936   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:38.148876   65055 retry.go:31] will retry after 211.9889ms: waiting for machine to come up
	I0722 00:40:38.362549   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:38.363259   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:38.363307   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:38.363229   65055 retry.go:31] will retry after 327.916518ms: waiting for machine to come up
	I0722 00:40:38.692940   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:38.693538   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:38.693567   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:38.693504   65055 retry.go:31] will retry after 470.074949ms: waiting for machine to come up
	I0722 00:40:39.165829   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:39.166975   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:39.167003   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:39.166925   65055 retry.go:31] will retry after 495.356657ms: waiting for machine to come up
	I0722 00:40:39.663591   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:39.664240   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:39.664269   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:39.664196   65055 retry.go:31] will retry after 605.986011ms: waiting for machine to come up
	I0722 00:40:40.272357   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:40.272870   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:40.272918   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:40.272821   65055 retry.go:31] will retry after 650.557396ms: waiting for machine to come up
	I0722 00:40:40.924766   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:40.925366   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:40.925435   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:40.925309   65055 retry.go:31] will retry after 1.190552524s: waiting for machine to come up
	I0722 00:40:42.117701   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:42.118196   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:42.118219   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:42.118159   65055 retry.go:31] will retry after 1.088402701s: waiting for machine to come up
	I0722 00:40:43.207780   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:43.208325   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:43.208352   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:43.208262   65055 retry.go:31] will retry after 1.272021705s: waiting for machine to come up
	I0722 00:40:44.481511   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:44.481971   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:44.481995   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:44.481940   65055 retry.go:31] will retry after 2.325038205s: waiting for machine to come up
	I0722 00:40:46.808374   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:46.808885   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:46.808937   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:46.808845   65055 retry.go:31] will retry after 2.744467052s: waiting for machine to come up
	I0722 00:40:49.556838   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:49.557307   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:49.557346   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:49.557273   65055 retry.go:31] will retry after 2.322385755s: waiting for machine to come up
	I0722 00:40:51.881882   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:51.882482   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:40:51.882512   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:40:51.882431   65055 retry.go:31] will retry after 3.850434721s: waiting for machine to come up
	I0722 00:40:55.734033   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:55.734626   63788 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:40:55.734652   63788 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:40:55.734678   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:55.735018   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657
	I0722 00:40:55.808835   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:40:55.808861   63788 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:40:55.808877   63788 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
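The retry.go lines above show the polling pattern used while waiting for libvirt's DHCP lease: retry with a growing, jittered delay. A minimal sketch of the same wait-for-IP loop; lookupIP is a hypothetical stand-in for the DHCP-lease query, and the backoff constants only approximate the intervals seen in the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it reports an address or the timeout
// elapses, sleeping with a roughly doubling, jittered backoff in between.
func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff)))) // add jitter
		if backoff < 4*time.Second {
			backoff *= 2 // intervals grow, as in the log above
		}
	}
	return "", fmt.Errorf("no IP within %s", timeout)
}

func main() {
	polls := 0
	ip, err := waitForIP(func() (string, bool) {
		polls++
		return "192.168.39.174", polls > 3 // pretend the lease appears on the 4th poll
	}, time.Minute)
	fmt.Println(ip, err)
}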
	I0722 00:40:55.811450   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:55.811933   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:55.811962   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:55.812214   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:40:55.812241   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:40:55.812287   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:40:55.812304   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:40:55.812354   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:40:55.939490   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:40:55.939809   63788 main.go:141] libmachine: (old-k8s-version-366657) KVM machine creation complete!
	I0722 00:40:55.940105   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:40:55.940691   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:40:55.940929   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:40:55.941094   63788 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 00:40:55.941107   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:40:55.942477   63788 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 00:40:55.942489   63788 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 00:40:55.942495   63788 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 00:40:55.942514   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:55.945012   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:55.945390   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:55.945418   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:55.945547   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:55.945762   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:55.945939   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:55.946088   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:55.946280   63788 main.go:141] libmachine: Using SSH client type: native
	I0722 00:40:55.946485   63788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:40:55.946498   63788 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 00:40:56.049670   63788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
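The WaitForSSH step above simply runs `exit 0` over SSH until it succeeds. A minimal sketch of the same probe, assuming the golang.org/x/crypto/ssh package and the key path from the log; InsecureIgnoreHostKey mirrors the StrictHostKeyChecking=no option shown earlier:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	for {
		client, err := ssh.Dial("tcp", "192.168.39.174:22", cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				err = sess.Run("exit 0") // the same no-op probe as in the log
				sess.Close()
			}
			client.Close()
			if serr == nil && err == nil {
				fmt.Println("ssh is up")
				return
			}
		}
		time.Sleep(time.Second) // keep polling until the guest answers
	}
}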
	I0722 00:40:56.049695   63788 main.go:141] libmachine: Detecting the provisioner...
	I0722 00:40:56.049704   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:56.052265   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.052619   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.052661   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.052829   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:56.053011   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.053153   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.053297   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:56.053448   63788 main.go:141] libmachine: Using SSH client type: native
	I0722 00:40:56.053640   63788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:40:56.053656   63788 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 00:40:56.159146   63788 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 00:40:56.159206   63788 main.go:141] libmachine: found compatible host: buildroot
	I0722 00:40:56.159212   63788 main.go:141] libmachine: Provisioning with buildroot...
	I0722 00:40:56.159219   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:40:56.159499   63788 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:40:56.159527   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:40:56.159723   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:56.162313   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.162729   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.162759   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.162887   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:56.163045   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.163220   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.163361   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:56.163512   63788 main.go:141] libmachine: Using SSH client type: native
	I0722 00:40:56.163692   63788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:40:56.163705   63788 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:40:56.284165   63788 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:40:56.284200   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:56.286857   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.287231   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.287260   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.287412   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:56.287574   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.287694   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.287861   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:56.288062   63788 main.go:141] libmachine: Using SSH client type: native
	I0722 00:40:56.288254   63788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:40:56.288282   63788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:40:56.399377   63788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:40:56.399405   63788 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:40:56.399445   63788 buildroot.go:174] setting up certificates
	I0722 00:40:56.399458   63788 provision.go:84] configureAuth start
	I0722 00:40:56.399471   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:40:56.399774   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:40:56.402835   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.403259   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.403299   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.403455   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:56.405653   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.406044   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.406073   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.406339   63788 provision.go:143] copyHostCerts
	I0722 00:40:56.406410   63788 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:40:56.406434   63788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:40:56.406508   63788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:40:56.406635   63788 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:40:56.406646   63788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:40:56.406681   63788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:40:56.406755   63788 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:40:56.406764   63788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:40:56.406791   63788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:40:56.406853   63788 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
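The configureAuth step above mints a per-machine server certificate whose SANs cover the VM's IP and hostnames. A minimal sketch of issuing such a certificate with crypto/x509; it self-signs for brevity, whereas the provisioner signs with its CA (passing the CA certificate and key as the parent):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-366657"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-366657"},
	}
	// Self-signed here: template doubles as parent; minikube would pass caCert/caKey.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}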
	I0722 00:40:56.535086   63788 provision.go:177] copyRemoteCerts
	I0722 00:40:56.535140   63788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:40:56.535162   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:56.537860   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.538246   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.538268   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.538477   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:56.538695   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.538858   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:56.539000   63788 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:40:56.624852   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:40:56.649634   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:40:56.672645   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:40:56.695355   63788 provision.go:87] duration metric: took 295.884192ms to configureAuth
	I0722 00:40:56.695381   63788 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:40:56.695564   63788 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:40:56.695661   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:56.698386   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.698828   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.698854   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.699128   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:56.699342   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.699514   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.699650   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:56.699898   63788 main.go:141] libmachine: Using SSH client type: native
	I0722 00:40:56.700076   63788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:40:56.700098   63788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:40:56.962853   63788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:40:56.962885   63788 main.go:141] libmachine: Checking connection to Docker...
	I0722 00:40:56.962896   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetURL
	I0722 00:40:56.964074   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using libvirt version 6000000
	I0722 00:40:56.966386   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.966745   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.966770   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.966946   63788 main.go:141] libmachine: Docker is up and running!
	I0722 00:40:56.966962   63788 main.go:141] libmachine: Reticulating splines...
	I0722 00:40:56.966970   63788 client.go:171] duration metric: took 20.533992907s to LocalClient.Create
	I0722 00:40:56.966998   63788 start.go:167] duration metric: took 20.534089821s to libmachine.API.Create "old-k8s-version-366657"
	I0722 00:40:56.967005   63788 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:40:56.967029   63788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:40:56.967044   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:40:56.967249   63788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:40:56.967275   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:56.969232   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.969511   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:56.969537   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:56.969693   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:56.969875   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:56.970045   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:56.970165   63788 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:40:57.053628   63788 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:40:57.057756   63788 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:40:57.057774   63788 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:40:57.057831   63788 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:40:57.057915   63788 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:40:57.058037   63788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:40:57.069402   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:40:57.093365   63788 start.go:296] duration metric: took 126.346333ms for postStartSetup
	I0722 00:40:57.093418   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:40:57.093992   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:40:57.096796   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.097261   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:57.097286   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.097538   63788 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:40:57.097772   63788 start.go:128] duration metric: took 20.686554918s to createHost
	I0722 00:40:57.097800   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:57.100213   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.100580   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:57.100613   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.100767   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:57.100987   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:57.101170   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:57.101378   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:57.101566   63788 main.go:141] libmachine: Using SSH client type: native
	I0722 00:40:57.101763   63788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:40:57.101776   63788 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:40:57.207150   63788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608857.158617630
	
	I0722 00:40:57.207168   63788 fix.go:216] guest clock: 1721608857.158617630
	I0722 00:40:57.207175   63788 fix.go:229] Guest: 2024-07-22 00:40:57.15861763 +0000 UTC Remote: 2024-07-22 00:40:57.097786982 +0000 UTC m=+42.277896182 (delta=60.830648ms)
	I0722 00:40:57.207193   63788 fix.go:200] guest clock delta is within tolerance: 60.830648ms
	I0722 00:40:57.207198   63788 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 20.796135961s
	I0722 00:40:57.207235   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:40:57.207595   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:40:57.210478   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.210960   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:57.211002   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.211122   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:40:57.211641   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:40:57.211812   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:40:57.211915   63788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:40:57.211952   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:57.212013   63788 ssh_runner.go:195] Run: cat /version.json
	I0722 00:40:57.212037   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:40:57.215028   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.215154   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.215536   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:57.215768   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:40:57.215774   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:57.215843   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:57.215891   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.215921   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:57.215937   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:57.215944   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:40:57.215955   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:57.216213   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:40:57.216248   63788 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:40:57.216342   63788 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:40:57.337972   63788 ssh_runner.go:195] Run: systemctl --version
	I0722 00:40:57.344804   63788 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:40:57.523322   63788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:40:57.529890   63788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:40:57.529984   63788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:40:57.545936   63788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:40:57.545967   63788 start.go:495] detecting cgroup driver to use...
	I0722 00:40:57.546041   63788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:40:57.563935   63788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:40:57.578313   63788 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:40:57.578386   63788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:40:57.593182   63788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:40:57.607092   63788 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:40:57.726412   63788 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:40:57.908921   63788 docker.go:233] disabling docker service ...
	I0722 00:40:57.909011   63788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:40:57.934152   63788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:40:57.946879   63788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:40:58.078791   63788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:40:58.201628   63788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:40:58.214938   63788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:40:58.235222   63788 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:40:58.235289   63788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:40:58.245492   63788 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:40:58.245583   63788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:40:58.256617   63788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:40:58.266335   63788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:40:58.276027   63788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:40:58.286091   63788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:40:58.295387   63788 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:40:58.295435   63788 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:40:58.307075   63788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:40:58.320169   63788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:40:58.448242   63788 ssh_runner.go:195] Run: sudo systemctl restart crio
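Note: the block above is minikube's standard CRI-O preparation — sed rewrites of /etc/crio/crio.conf.d/02-crio.conf set the pause image and the cgroupfs cgroup manager, br_netfilter is loaded explicitly because the sysctl probe failed, ip_forward is enabled, and crio is restarted to pick the changes up. A quick way to confirm the rewrite took on the guest (a sketch using the same file path):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"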
	I0722 00:40:58.634182   63788 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:40:58.634268   63788 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:40:58.640090   63788 start.go:563] Will wait 60s for crictl version
	I0722 00:40:58.640150   63788 ssh_runner.go:195] Run: which crictl
	I0722 00:40:58.644897   63788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:40:58.692031   63788 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:40:58.692118   63788 ssh_runner.go:195] Run: crio --version
	I0722 00:40:58.722174   63788 ssh_runner.go:195] Run: crio --version
	I0722 00:40:58.755311   63788 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:40:58.756550   63788 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:40:58.760209   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:58.760660   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:40:58.760696   63788 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:40:58.760924   63788 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:40:58.765192   63788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
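The /etc/hosts edit above is an idempotent update pattern: filter out any existing line for the name, append the fresh mapping, write to a temp file, then install it with cp. Copying over the file (rather than mv) rewrites the existing inode, which keeps /etc/hosts working even where it is bind-mounted. A generalized sketch with a hypothetical name/IP pair:

	{ grep -v $'\tmyhost.internal$' /etc/hosts; echo $'10.0.0.5\tmyhost.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$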
	I0722 00:40:58.777306   63788 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:40:58.777460   63788 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:40:58.777524   63788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:40:58.808074   63788 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:40:58.808149   63788 ssh_runner.go:195] Run: which lz4
	I0722 00:40:58.811978   63788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:40:58.816155   63788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:40:58.816190   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:41:00.471709   63788 crio.go:462] duration metric: took 1.659758315s to copy over tarball
	I0722 00:41:00.471845   63788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:41:03.244166   63788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.772286838s)
	I0722 00:41:03.244194   63788 crio.go:469] duration metric: took 2.772428037s to extract the tarball
	I0722 00:41:03.244202   63788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:41:03.289838   63788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:41:03.336689   63788 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:41:03.336710   63788 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:41:03.336783   63788 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:41:03.336803   63788 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:41:03.336812   63788 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:41:03.336787   63788 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:41:03.336838   63788 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:41:03.336858   63788 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:41:03.336813   63788 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:41:03.336787   63788 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:41:03.338270   63788 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:41:03.338359   63788 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:41:03.338489   63788 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:41:03.338532   63788 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:41:03.338536   63788 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:41:03.338274   63788 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:41:03.338698   63788 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:41:03.338835   63788 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:41:03.571117   63788 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:41:03.582350   63788 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:41:03.594334   63788 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:41:03.603346   63788 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:41:03.609233   63788 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:41:03.611784   63788 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:41:03.639242   63788 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:41:03.639302   63788 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:41:03.639362   63788 ssh_runner.go:195] Run: which crictl
	I0722 00:41:03.685065   63788 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:41:03.685118   63788 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:41:03.685168   63788 ssh_runner.go:195] Run: which crictl
	I0722 00:41:03.730724   63788 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:41:03.730789   63788 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:41:03.730849   63788 ssh_runner.go:195] Run: which crictl
	I0722 00:41:03.738132   63788 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:41:03.738167   63788 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:41:03.738177   63788 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:41:03.738197   63788 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:41:03.738219   63788 ssh_runner.go:195] Run: which crictl
	I0722 00:41:03.738235   63788 ssh_runner.go:195] Run: which crictl
	I0722 00:41:03.738236   63788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:41:03.738265   63788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:41:03.738136   63788 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:41:03.738203   63788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:41:03.738284   63788 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:41:03.738306   63788 ssh_runner.go:195] Run: which crictl
	I0722 00:41:03.756180   63788 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:41:03.840864   63788 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:41:03.840917   63788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:41:03.840971   63788 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:41:03.840982   63788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:41:03.841022   63788 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:41:03.841060   63788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:41:03.850157   63788 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:41:03.850200   63788 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:41:03.850249   63788 ssh_runner.go:195] Run: which crictl
	I0722 00:41:03.928895   63788 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:41:03.928972   63788 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:41:03.929024   63788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:41:03.929093   63788 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:41:03.962403   63788 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 00:41:04.219751   63788 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:41:04.372364   63788 cache_images.go:92] duration metric: took 1.035636633s to LoadCachedImages
	W0722 00:41:04.372457   63788 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0722 00:41:04.372473   63788 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:41:04.372623   63788 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
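The doubled ExecStart in the generated unit above is deliberate: in a systemd drop-in, an empty ExecStart= line first clears the ExecStart inherited from the base kubelet.service so the override can define its own command line. A minimal sketch of the same pattern (the flag value is a placeholder):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/kubelet --node-ip=10.0.0.5\n' \
	  | sudo tee /etc/systemd/system/kubelet.service.d/10-override.conf
	sudo systemctl daemon-reload && sudo systemctl restart kubelet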
	I0722 00:41:04.372702   63788 ssh_runner.go:195] Run: crio config
	I0722 00:41:04.433786   63788 cni.go:84] Creating CNI manager for ""
	I0722 00:41:04.433813   63788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:41:04.433828   63788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:41:04.433853   63788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:41:04.434023   63788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
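The generated kubeadm.yaml above is four documents separated by ---: InitConfiguration (node-local API endpoint and kubelet extra args), ClusterConfiguration (cluster-wide component flags), KubeletConfiguration, and KubeProxyConfiguration; kubeadm.k8s.io/v1beta2 is the schema matching Kubernetes v1.20. One way to sanity-check such a file without touching the host is a dry run against the pinned kubeadm binary (a sketch; paths as used below):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run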
	
	I0722 00:41:04.434095   63788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:41:04.444398   63788 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:41:04.444486   63788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:41:04.454923   63788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:41:04.471443   63788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:41:04.490245   63788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:41:04.510154   63788 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:41:04.514505   63788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:41:04.531169   63788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:41:04.655115   63788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:41:04.671926   63788 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:41:04.671953   63788 certs.go:194] generating shared ca certs ...
	I0722 00:41:04.671972   63788 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:41:04.672195   63788 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:41:04.672253   63788 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:41:04.672267   63788 certs.go:256] generating profile certs ...
	I0722 00:41:04.672340   63788 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:41:04.672370   63788 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.crt with IP's: []
	I0722 00:41:04.722033   63788 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.crt ...
	I0722 00:41:04.722059   63788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.crt: {Name:mkeef7389eea3e98c1af995c4d622042ada7b12f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:41:04.722254   63788 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key ...
	I0722 00:41:04.722277   63788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key: {Name:mk4ae1f47f8c8751b2d68658bade7fa8c0b9f435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:41:04.722420   63788 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:41:04.722441   63788 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt.2cc8579c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.174]
	I0722 00:41:04.888376   63788 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt.2cc8579c ...
	I0722 00:41:04.888401   63788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt.2cc8579c: {Name:mkf0bd8121b24cc10e856ce50301bc7f1fae66b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:41:04.888555   63788 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c ...
	I0722 00:41:04.888571   63788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c: {Name:mke97985dd84f9e50a214b2d218c2fbf4bb4e447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:41:04.888665   63788 certs.go:381] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt.2cc8579c -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt
	I0722 00:41:04.888765   63788 certs.go:385] copying /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c -> /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key
	I0722 00:41:04.888839   63788 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:41:04.888859   63788 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt with IP's: []
	I0722 00:41:05.045793   63788 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt ...
	I0722 00:41:05.045823   63788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt: {Name:mkb6b9b0c7d440aa01d75af3726386d547d278b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:41:05.045977   63788 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key ...
	I0722 00:41:05.045990   63788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key: {Name:mk3cef6e78f2d7d8d5b5bf913dd7e963af49ad5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:41:05.046146   63788 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:41:05.046184   63788 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:41:05.046193   63788 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:41:05.046212   63788 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:41:05.046235   63788 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:41:05.046255   63788 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:41:05.046292   63788 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:41:05.046889   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:41:05.075256   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:41:05.102201   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:41:05.125895   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:41:05.150555   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:41:05.174535   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:41:05.198664   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:41:05.227518   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:41:05.349249   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:41:05.371894   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:41:05.397553   63788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:41:05.424496   63788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:41:05.442516   63788 ssh_runner.go:195] Run: openssl version
	I0722 00:41:05.448591   63788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:41:05.463466   63788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:41:05.468999   63788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:41:05.469088   63788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:41:05.476663   63788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:41:05.491161   63788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:41:05.506192   63788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:41:05.512363   63788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:41:05.512431   63788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:41:05.522079   63788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:41:05.540490   63788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:41:05.559377   63788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:41:05.564423   63788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:41:05.564491   63788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:41:05.570191   63788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
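The openssl/ln sequence above implements OpenSSL's hashed-directory CA lookup: a certificate in /etc/ssl/certs is found via a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash — which is why minikubeCA.pem ends up behind b5213941.0. The same convention by hand, using the paths from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"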
	I0722 00:41:05.584886   63788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:41:05.589594   63788 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:41:05.589671   63788 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:41:05.589780   63788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:41:05.589853   63788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:41:05.632391   63788 cri.go:89] found id: ""
	I0722 00:41:05.632468   63788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:41:05.643622   63788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:41:05.653867   63788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:41:05.664705   63788 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:41:05.664724   63788 kubeadm.go:157] found existing configuration files:
	
	I0722 00:41:05.664774   63788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:41:05.674309   63788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:41:05.674374   63788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:41:05.684533   63788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:41:05.693822   63788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:41:05.693878   63788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:41:05.705122   63788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:41:05.714727   63788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:41:05.714795   63788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:41:05.724507   63788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:41:05.734180   63788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:41:05.734242   63788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:41:05.746432   63788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:41:06.030230   63788 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:43:04.330703   63788 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:43:04.330835   63788 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:43:04.331997   63788 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:43:04.332066   63788 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:43:04.332158   63788 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:43:04.332296   63788 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:43:04.332447   63788 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
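The long --ignore-preflight-errors list passed to kubeadm init above mostly whitelists conditions minikube creates on purpose: pre-populated /var/lib/minikube and manifest directories, the 2-CPU/2200MB guest sizing, and the swap and port checks. To see which preflight checks would otherwise fire, the preflight phase can be run on its own (a sketch against the same config file):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml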
	I0722 00:43:04.332533   63788 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:43:04.334035   63788 out.go:204]   - Generating certificates and keys ...
	I0722 00:43:04.334118   63788 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:43:04.334194   63788 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:43:04.334273   63788 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 00:43:04.334353   63788 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 00:43:04.334446   63788 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 00:43:04.334523   63788 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 00:43:04.334599   63788 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 00:43:04.334793   63788 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-366657] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0722 00:43:04.334875   63788 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 00:43:04.335036   63788 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-366657] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0722 00:43:04.335137   63788 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 00:43:04.335218   63788 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 00:43:04.335288   63788 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 00:43:04.335372   63788 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:43:04.335445   63788 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:43:04.335519   63788 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:43:04.335605   63788 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:43:04.335682   63788 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:43:04.335817   63788 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:43:04.335919   63788 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:43:04.335973   63788 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:43:04.336061   63788 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:43:04.338189   63788 out.go:204]   - Booting up control plane ...
	I0722 00:43:04.338276   63788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:43:04.338360   63788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:43:04.338459   63788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:43:04.338557   63788 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:43:04.338742   63788 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:43:04.338808   63788 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:43:04.338911   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:43:04.339198   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:43:04.339262   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:43:04.339506   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:43:04.339602   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:43:04.339853   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:43:04.339914   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:43:04.340069   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:43:04.340146   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:43:04.340330   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:43:04.340341   63788 kubeadm.go:310] 
	I0722 00:43:04.340404   63788 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:43:04.340464   63788 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:43:04.340473   63788 kubeadm.go:310] 
	I0722 00:43:04.340522   63788 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:43:04.340570   63788 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:43:04.340704   63788 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:43:04.340714   63788 kubeadm.go:310] 
	I0722 00:43:04.340814   63788 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:43:04.340864   63788 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:43:04.340893   63788 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:43:04.340899   63788 kubeadm.go:310] 
	I0722 00:43:04.341023   63788 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:43:04.341123   63788 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:43:04.341132   63788 kubeadm.go:310] 
	I0722 00:43:04.341247   63788 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:43:04.341357   63788 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:43:04.341449   63788 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:43:04.341527   63788 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:43:04.341550   63788 kubeadm.go:310] 
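The repeated [kubelet-check] failures show kubeadm polling the kubelet's local healthz endpoint on port 10248 and getting connection refused, i.e. the kubelet process never came up at all. A sketch of reproducing the probe and the suggested triage by hand on the node (all three commands appear verbatim in the kubeadm output above):

    # Probe the kubelet's local health endpoint, exactly as kubeadm does.
    curl -sSL http://localhost:10248/healthz
    # If it refuses connections, inspect the kubelet unit and its journal.
    systemctl status kubelet
    journalctl -xeu kubelet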
	W0722 00:43:04.341657   63788 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-366657] and IPs [192.168.39.174 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-366657] and IPs [192.168.39.174 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0722 00:43:04.341715   63788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:43:04.877939   63788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
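Between attempts, minikube tears down the partially initialized control plane with kubeadm reset before re-running init. A sketch of that teardown step as run above (the versioned kubeadm binary path and CRI socket are taken from the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force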
	I0722 00:43:04.891339   63788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:43:04.900158   63788 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:43:04.900178   63788 kubeadm.go:157] found existing configuration files:
	
	I0722 00:43:04.900220   63788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:43:04.908509   63788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:43:04.908562   63788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:43:04.917327   63788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:43:04.925565   63788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:43:04.925624   63788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:43:04.934366   63788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:43:04.942515   63788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:43:04.942562   63788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:43:04.951628   63788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:43:04.959768   63788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:43:04.959831   63788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:43:04.968163   63788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:43:05.042731   63788 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:43:05.042862   63788 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:43:05.175659   63788 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:43:05.175777   63788 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:43:05.175874   63788 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:43:05.350147   63788 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:43:05.352066   63788 out.go:204]   - Generating certificates and keys ...
	I0722 00:43:05.352152   63788 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:43:05.352238   63788 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:43:05.352344   63788 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:43:05.352433   63788 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:43:05.352533   63788 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:43:05.352611   63788 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:43:05.352723   63788 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:43:05.352813   63788 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:43:05.352906   63788 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:43:05.352974   63788 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:43:05.353006   63788 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:43:05.353103   63788 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:43:05.615697   63788 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:43:05.812975   63788 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:43:05.922865   63788 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:43:06.057114   63788 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:43:06.071636   63788 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:43:06.072640   63788 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:43:06.072683   63788 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:43:06.203837   63788 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:43:06.205624   63788 out.go:204]   - Booting up control plane ...
	I0722 00:43:06.205749   63788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:43:06.213523   63788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:43:06.214463   63788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:43:06.215214   63788 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:43:06.217265   63788 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:43:46.217581   63788 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:43:46.218000   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:43:46.218280   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:43:51.218483   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:43:51.218697   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:44:01.219272   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:44:01.219499   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:44:21.220472   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:44:21.220644   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:45:01.223191   63788 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:45:01.223695   63788 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:45:01.223733   63788 kubeadm.go:310] 
	I0722 00:45:01.223830   63788 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:45:01.223924   63788 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:45:01.223933   63788 kubeadm.go:310] 
	I0722 00:45:01.224012   63788 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:45:01.224114   63788 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:45:01.224351   63788 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:45:01.224364   63788 kubeadm.go:310] 
	I0722 00:45:01.224592   63788 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:45:01.224694   63788 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:45:01.224774   63788 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:45:01.224788   63788 kubeadm.go:310] 
	I0722 00:45:01.225049   63788 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:45:01.225240   63788 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:45:01.225254   63788 kubeadm.go:310] 
	I0722 00:45:01.225653   63788 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:45:01.225879   63788 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:45:01.226125   63788 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:45:01.226511   63788 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:45:01.226526   63788 kubeadm.go:310] 
	I0722 00:45:01.226658   63788 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:45:01.226746   63788 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:45:01.226829   63788 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:45:01.226902   63788 kubeadm.go:394] duration metric: took 3m55.637231397s to StartCluster
	I0722 00:45:01.226962   63788 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:45:01.227061   63788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:45:01.266801   63788 cri.go:89] found id: ""
	I0722 00:45:01.266826   63788 logs.go:276] 0 containers: []
	W0722 00:45:01.266834   63788 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:45:01.266840   63788 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:45:01.266901   63788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:45:01.297938   63788 cri.go:89] found id: ""
	I0722 00:45:01.297966   63788 logs.go:276] 0 containers: []
	W0722 00:45:01.297977   63788 logs.go:278] No container was found matching "etcd"
	I0722 00:45:01.297985   63788 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:45:01.298051   63788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:45:01.330235   63788 cri.go:89] found id: ""
	I0722 00:45:01.330259   63788 logs.go:276] 0 containers: []
	W0722 00:45:01.330267   63788 logs.go:278] No container was found matching "coredns"
	I0722 00:45:01.330273   63788 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:45:01.330323   63788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:45:01.361716   63788 cri.go:89] found id: ""
	I0722 00:45:01.361742   63788 logs.go:276] 0 containers: []
	W0722 00:45:01.361752   63788 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:45:01.361760   63788 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:45:01.361826   63788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:45:01.392531   63788 cri.go:89] found id: ""
	I0722 00:45:01.392556   63788 logs.go:276] 0 containers: []
	W0722 00:45:01.392562   63788 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:45:01.392568   63788 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:45:01.392620   63788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:45:01.424005   63788 cri.go:89] found id: ""
	I0722 00:45:01.424033   63788 logs.go:276] 0 containers: []
	W0722 00:45:01.424043   63788 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:45:01.424051   63788 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:45:01.424117   63788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:45:01.455084   63788 cri.go:89] found id: ""
	I0722 00:45:01.455116   63788 logs.go:276] 0 containers: []
	W0722 00:45:01.455126   63788 logs.go:278] No container was found matching "kindnet"
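Here minikube queries the CRI for each expected control-plane container by name and finds none, confirming the kubelet never launched the static pods. A sketch of the equivalent manual scan (both the per-name query and the combined listing appear in the log above):

    # Per-component query, as minikube runs it.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done
    # Or the combined listing from kubeadm's own troubleshooting advice.
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause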
	I0722 00:45:01.455137   63788 logs.go:123] Gathering logs for container status ...
	I0722 00:45:01.455161   63788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:45:01.490701   63788 logs.go:123] Gathering logs for kubelet ...
	I0722 00:45:01.490733   63788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:45:01.537534   63788 logs.go:123] Gathering logs for dmesg ...
	I0722 00:45:01.537571   63788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:45:01.549731   63788 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:45:01.549759   63788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:45:01.657580   63788 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
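The connection-refused error against localhost:8443 is consistent with the empty container listings: no kube-apiserver ever started, so nothing listens on the apiserver port. A quick cross-check (an assumption, not part of the minikube log; assumes ss is available on the node):

    # Verify nothing is listening on the apiserver port.
    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"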
	I0722 00:45:01.657599   63788 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:45:01.657610   63788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
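This log-gathering pass mirrors commands you can run directly on the node; a sketch collecting the same four sources (kubelet journal, kernel warnings, node description, CRI-O journal), with every command and path taken from the log:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400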
	W0722 00:45:01.751188   63788 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:45:01.751236   63788 out.go:239] * 
	W0722 00:45:01.751298   63788 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:45:01.751335   63788 out.go:239] * 
	W0722 00:45:01.752278   63788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
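As the box advises, a full diagnostic bundle suitable for attaching to a GitHub issue can be produced with the command from the box above:

    minikube logs --file=logs.txt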
	I0722 00:45:01.755018   63788 out.go:177] 
	W0722 00:45:01.756138   63788 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	
	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:45:01.756188   63788 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:45:01.756214   63788 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:45:01.757551   63788 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-366657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 6 (220.110202ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0722 00:45:02.021983   71017 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-366657" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (287.22s)
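
The kubeadm wait-control-plane timeout above, together with minikube's own suggestion, points at the kubelet failing to come up on this v1.20.0 node (the suggested culprit is a cgroup-driver mismatch). A minimal sketch of the suggested retry plus the node-side diagnostics, using only flags and commands quoted in the log; the profile name and runtime endpoint are from this run and may differ elsewhere:

	# Retry with the cgroup driver minikube's Suggestion line proposes
	out/minikube-linux-amd64 start -p old-k8s-version-366657 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# kubelet diagnostics quoted by kubeadm, plus the fix for its stderr warning
	systemctl status kubelet
	journalctl -xeu kubelet
	sudo systemctl enable kubelet.service
	# List Kubernetes containers under cri-o, as kubeadm's output suggests
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# The post-mortem also flags a stale kubectl context; minikube's own remedy
	out/minikube-linux-amd64 update-context -p old-k8s-version-366657
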
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-214905 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-214905 --alsologtostderr -v=3: exit status 82 (2m0.52288773s)
-- stdout --
	* Stopping node "default-k8s-diff-port-214905"  ...
	
	
-- /stdout --
** stderr ** 
	I0722 00:42:40.062754   69242 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:42:40.063043   69242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:40.063057   69242 out.go:304] Setting ErrFile to fd 2...
	I0722 00:42:40.063063   69242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:40.063381   69242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:42:40.063688   69242 out.go:298] Setting JSON to false
	I0722 00:42:40.063790   69242 mustload.go:65] Loading cluster: default-k8s-diff-port-214905
	I0722 00:42:40.064321   69242 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:42:40.064440   69242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:42:40.064658   69242 mustload.go:65] Loading cluster: default-k8s-diff-port-214905
	I0722 00:42:40.064810   69242 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:42:40.064851   69242 stop.go:39] StopHost: default-k8s-diff-port-214905
	I0722 00:42:40.065373   69242 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:42:40.065433   69242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:42:40.083626   69242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I0722 00:42:40.084260   69242 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:42:40.084969   69242 main.go:141] libmachine: Using API Version  1
	I0722 00:42:40.084991   69242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:42:40.085390   69242 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:42:40.087698   69242 out.go:177] * Stopping node "default-k8s-diff-port-214905"  ...
	I0722 00:42:40.089178   69242 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 00:42:40.089222   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:42:40.089506   69242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 00:42:40.089557   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:42:40.093006   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:42:40.093382   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:41:39 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:42:40.093405   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:42:40.093644   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:42:40.093813   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:42:40.093959   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:42:40.094157   69242 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:42:40.203263   69242 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 00:42:40.263841   69242 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 00:42:40.324115   69242 main.go:141] libmachine: Stopping "default-k8s-diff-port-214905"...
	I0722 00:42:40.324174   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:42:40.326382   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Stop
	I0722 00:42:40.331174   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 0/120
	I0722 00:42:41.332918   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 1/120
	I0722 00:42:42.335206   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 2/120
	I0722 00:42:43.336644   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 3/120
	I0722 00:42:44.337804   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 4/120
	I0722 00:42:45.339911   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 5/120
	I0722 00:42:46.341633   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 6/120
	I0722 00:42:47.342871   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 7/120
	I0722 00:42:48.345192   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 8/120
	I0722 00:42:49.346756   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 9/120
	I0722 00:42:50.348248   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 10/120
	I0722 00:42:51.350623   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 11/120
	I0722 00:42:52.352183   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 12/120
	I0722 00:42:53.353746   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 13/120
	I0722 00:42:54.355362   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 14/120
	I0722 00:42:55.356979   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 15/120
	I0722 00:42:56.358790   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 16/120
	I0722 00:42:57.360892   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 17/120
	I0722 00:42:58.362268   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 18/120
	I0722 00:42:59.363671   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 19/120
	I0722 00:43:00.366064   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 20/120
	I0722 00:43:01.367732   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 21/120
	I0722 00:43:02.369151   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 22/120
	I0722 00:43:03.370536   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 23/120
	I0722 00:43:04.371816   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 24/120
	I0722 00:43:05.373750   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 25/120
	I0722 00:43:06.375303   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 26/120
	I0722 00:43:07.376726   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 27/120
	I0722 00:43:08.378174   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 28/120
	I0722 00:43:09.379505   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 29/120
	I0722 00:43:10.381038   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 30/120
	I0722 00:43:11.382482   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 31/120
	I0722 00:43:12.384270   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 32/120
	I0722 00:43:13.386115   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 33/120
	I0722 00:43:14.387717   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 34/120
	I0722 00:43:15.389732   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 35/120
	I0722 00:43:16.391132   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 36/120
	I0722 00:43:17.392653   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 37/120
	I0722 00:43:18.394002   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 38/120
	I0722 00:43:19.395392   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 39/120
	I0722 00:43:20.396797   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 40/120
	I0722 00:43:21.398160   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 41/120
	I0722 00:43:22.399623   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 42/120
	I0722 00:43:23.401255   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 43/120
	I0722 00:43:24.402691   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 44/120
	I0722 00:43:25.405028   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 45/120
	I0722 00:43:26.406373   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 46/120
	I0722 00:43:27.407926   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 47/120
	I0722 00:43:28.409546   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 48/120
	I0722 00:43:29.411442   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 49/120
	I0722 00:43:30.413985   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 50/120
	I0722 00:43:31.415402   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 51/120
	I0722 00:43:32.417521   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 52/120
	I0722 00:43:33.419029   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 53/120
	I0722 00:43:34.421060   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 54/120
	I0722 00:43:35.423129   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 55/120
	I0722 00:43:36.424576   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 56/120
	I0722 00:43:37.426393   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 57/120
	I0722 00:43:38.427951   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 58/120
	I0722 00:43:39.429472   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 59/120
	I0722 00:43:40.431393   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 60/120
	I0722 00:43:41.433137   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 61/120
	I0722 00:43:42.434795   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 62/120
	I0722 00:43:43.437076   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 63/120
	I0722 00:43:44.438444   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 64/120
	I0722 00:43:45.440451   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 65/120
	I0722 00:43:46.441858   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 66/120
	I0722 00:43:47.443251   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 67/120
	I0722 00:43:48.444712   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 68/120
	I0722 00:43:49.446250   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 69/120
	I0722 00:43:50.448535   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 70/120
	I0722 00:43:51.449965   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 71/120
	I0722 00:43:52.451967   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 72/120
	I0722 00:43:53.453226   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 73/120
	I0722 00:43:54.454704   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 74/120
	I0722 00:43:55.456453   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 75/120
	I0722 00:43:56.457814   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 76/120
	I0722 00:43:57.459526   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 77/120
	I0722 00:43:58.461288   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 78/120
	I0722 00:43:59.462877   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 79/120
	I0722 00:44:00.465191   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 80/120
	I0722 00:44:01.466305   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 81/120
	I0722 00:44:02.467895   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 82/120
	I0722 00:44:03.469020   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 83/120
	I0722 00:44:04.470405   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 84/120
	I0722 00:44:05.471885   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 85/120
	I0722 00:44:06.473246   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 86/120
	I0722 00:44:07.474647   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 87/120
	I0722 00:44:08.476243   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 88/120
	I0722 00:44:09.477790   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 89/120
	I0722 00:44:10.479887   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 90/120
	I0722 00:44:11.481548   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 91/120
	I0722 00:44:12.483137   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 92/120
	I0722 00:44:13.484433   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 93/120
	I0722 00:44:14.485820   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 94/120
	I0722 00:44:15.487984   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 95/120
	I0722 00:44:16.489480   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 96/120
	I0722 00:44:17.490856   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 97/120
	I0722 00:44:18.492317   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 98/120
	I0722 00:44:19.493915   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 99/120
	I0722 00:44:20.496285   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 100/120
	I0722 00:44:21.497600   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 101/120
	I0722 00:44:22.498902   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 102/120
	I0722 00:44:23.501189   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 103/120
	I0722 00:44:24.502566   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 104/120
	I0722 00:44:25.504346   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 105/120
	I0722 00:44:26.506003   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 106/120
	I0722 00:44:27.507475   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 107/120
	I0722 00:44:28.509648   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 108/120
	I0722 00:44:29.511080   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 109/120
	I0722 00:44:30.512996   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 110/120
	I0722 00:44:31.514880   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 111/120
	I0722 00:44:32.517133   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 112/120
	I0722 00:44:33.518980   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 113/120
	I0722 00:44:34.521198   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 114/120
	I0722 00:44:35.523178   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 115/120
	I0722 00:44:36.524972   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 116/120
	I0722 00:44:37.526517   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 117/120
	I0722 00:44:38.527883   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 118/120
	I0722 00:44:39.529257   69242 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for machine to stop 119/120
	I0722 00:44:40.530769   69242 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 00:44:40.530836   69242 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0722 00:44:40.532491   69242 out.go:177] 
	W0722 00:44:40.533669   69242 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0722 00:44:40.533687   69242 out.go:239] * 
	* 
	W0722 00:44:40.536358   69242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:44:40.537525   69242 out.go:177] 
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-214905 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
E0722 00:44:42.227249   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:46.543901   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:46.549141   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:46.559388   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:46.579647   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:46.619968   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:46.700361   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:46.860873   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:47.181069   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:47.822238   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:49.102368   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:51.663426   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:44:52.467394   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:54.888940   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:54.894181   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:54.904476   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:54.924753   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:54.965069   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:55.045421   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:55.172901   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0722 00:44:55.206134   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:55.527287   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:56.168191   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:44:56.784066   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905: exit status 3 (18.628381091s)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0722 00:44:59.166932   70860 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.97:22: connect: no route to host
	E0722 00:44:59.166970   70860 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.97:22: connect: no route to host
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-214905" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)
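
The stop failure above is a poll timeout rather than an immediate error: after backing up /etc/cni and /etc/kubernetes, the kvm2 driver requests a stop and then polls the guest once per second for 120 iterations ("Waiting for machine to stop N/120") before exiting 82 with GUEST_STOP_TIMEOUT. A hedged sketch of the same sequence and follow-ups run by hand; the virsh fallback is hypothetical and not part of the test:

	# The stop the test runs (timed out here after the 2-minute poll window)
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-214905 --alsologtostderr -v=3
	# The post-mortem status probe the harness falls back to
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905
	# Log bundle minikube asks for when filing the suggested GitHub issue
	out/minikube-linux-amd64 logs --file=logs.txt -p default-k8s-diff-port-214905
	# Hypothetical manual fallback for a KVM guest that ignores the stop request
	virsh destroy default-k8s-diff-port-214905
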
TestStartStop/group/no-preload/serial/Stop (139.3s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-945581 --alsologtostderr -v=3
E0722 00:43:01.305952   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:01.311249   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:01.321523   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:01.341840   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:01.381980   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:01.462307   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:01.622676   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:01.943462   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:02.583729   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:03.864334   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:06.424623   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:11.544867   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:21.785828   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-945581 --alsologtostderr -v=3: exit status 82 (2m0.648277317s)
-- stdout --
	* Stopping node "no-preload-945581"  ...
	
	
-- /stdout --
** stderr ** 
	I0722 00:42:56.542523   69606 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:42:56.542649   69606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:56.542656   69606 out.go:304] Setting ErrFile to fd 2...
	I0722 00:42:56.542660   69606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:42:56.542862   69606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:42:56.543077   69606 out.go:298] Setting JSON to false
	I0722 00:42:56.543151   69606 mustload.go:65] Loading cluster: no-preload-945581
	I0722 00:42:56.543462   69606 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:42:56.543524   69606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:42:56.543686   69606 mustload.go:65] Loading cluster: no-preload-945581
	I0722 00:42:56.543778   69606 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:42:56.543800   69606 stop.go:39] StopHost: no-preload-945581
	I0722 00:42:56.544145   69606 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:42:56.544187   69606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:42:56.563421   69606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0722 00:42:56.563966   69606 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:42:56.564594   69606 main.go:141] libmachine: Using API Version  1
	I0722 00:42:56.564616   69606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:42:56.565004   69606 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:42:56.567677   69606 out.go:177] * Stopping node "no-preload-945581"  ...
	I0722 00:42:56.568963   69606 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 00:42:56.568994   69606 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:42:56.569235   69606 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 00:42:56.569269   69606 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:42:56.572667   69606 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:42:56.573118   69606 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:41:13 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:42:56.573147   69606 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:42:56.573324   69606 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:42:56.573534   69606 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:42:56.573695   69606 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:42:56.573832   69606 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:42:56.663895   69606 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 00:42:56.721155   69606 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 00:42:56.769725   69606 main.go:141] libmachine: Stopping "no-preload-945581"...
	I0722 00:42:56.769757   69606 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:42:56.771516   69606 main.go:141] libmachine: (no-preload-945581) Calling .Stop
	I0722 00:42:56.775384   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 0/120
	I0722 00:42:57.776899   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 1/120
	I0722 00:42:58.779391   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 2/120
	I0722 00:42:59.781421   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 3/120
	I0722 00:43:00.782952   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 4/120
	I0722 00:43:01.785051   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 5/120
	I0722 00:43:02.786720   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 6/120
	I0722 00:43:03.788016   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 7/120
	I0722 00:43:04.789174   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 8/120
	I0722 00:43:05.791060   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 9/120
	I0722 00:43:06.792475   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 10/120
	I0722 00:43:07.793784   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 11/120
	I0722 00:43:08.795071   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 12/120
	I0722 00:43:09.797063   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 13/120
	I0722 00:43:10.798524   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 14/120
	I0722 00:43:11.800794   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 15/120
	I0722 00:43:12.802320   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 16/120
	I0722 00:43:13.803955   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 17/120
	I0722 00:43:14.805258   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 18/120
	I0722 00:43:15.806717   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 19/120
	I0722 00:43:16.809072   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 20/120
	I0722 00:43:17.810634   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 21/120
	I0722 00:43:18.812204   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 22/120
	I0722 00:43:19.814340   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 23/120
	I0722 00:43:20.816738   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 24/120
	I0722 00:43:21.819003   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 25/120
	I0722 00:43:22.820237   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 26/120
	I0722 00:43:23.821784   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 27/120
	I0722 00:43:24.823170   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 28/120
	I0722 00:43:25.824684   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 29/120
	I0722 00:43:26.827055   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 30/120
	I0722 00:43:27.829273   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 31/120
	I0722 00:43:28.831179   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 32/120
	I0722 00:43:29.832651   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 33/120
	I0722 00:43:30.834379   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 34/120
	I0722 00:43:32.001918   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 35/120
	I0722 00:43:33.003493   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 36/120
	I0722 00:43:34.006628   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 37/120
	I0722 00:43:35.008055   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 38/120
	I0722 00:43:36.009429   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 39/120
	I0722 00:43:37.011478   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 40/120
	I0722 00:43:38.012818   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 41/120
	I0722 00:43:39.014089   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 42/120
	I0722 00:43:40.015456   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 43/120
	I0722 00:43:41.016740   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 44/120
	I0722 00:43:42.018913   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 45/120
	I0722 00:43:43.020267   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 46/120
	I0722 00:43:44.021725   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 47/120
	I0722 00:43:45.023115   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 48/120
	I0722 00:43:46.024691   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 49/120
	I0722 00:43:47.026801   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 50/120
	I0722 00:43:48.028148   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 51/120
	I0722 00:43:49.029576   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 52/120
	I0722 00:43:50.031033   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 53/120
	I0722 00:43:51.033052   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 54/120
	I0722 00:43:52.035057   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 55/120
	I0722 00:43:53.036352   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 56/120
	I0722 00:43:54.037756   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 57/120
	I0722 00:43:55.039357   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 58/120
	I0722 00:43:56.040854   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 59/120
	I0722 00:43:57.042672   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 60/120
	I0722 00:43:58.044246   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 61/120
	I0722 00:43:59.045882   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 62/120
	I0722 00:44:00.047401   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 63/120
	I0722 00:44:01.049150   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 64/120
	I0722 00:44:02.050503   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 65/120
	I0722 00:44:03.052318   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 66/120
	I0722 00:44:04.053972   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 67/120
	I0722 00:44:05.055514   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 68/120
	I0722 00:44:06.057374   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 69/120
	I0722 00:44:07.058925   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 70/120
	I0722 00:44:08.061345   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 71/120
	I0722 00:44:09.062651   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 72/120
	I0722 00:44:10.064239   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 73/120
	I0722 00:44:11.066701   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 74/120
	I0722 00:44:12.068008   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 75/120
	I0722 00:44:13.069626   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 76/120
	I0722 00:44:14.072111   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 77/120
	I0722 00:44:15.073669   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 78/120
	I0722 00:44:16.075039   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 79/120
	I0722 00:44:17.077190   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 80/120
	I0722 00:44:18.078642   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 81/120
	I0722 00:44:19.081280   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 82/120
	I0722 00:44:20.082876   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 83/120
	I0722 00:44:21.084218   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 84/120
	I0722 00:44:22.086564   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 85/120
	I0722 00:44:23.088459   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 86/120
	I0722 00:44:24.089866   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 87/120
	I0722 00:44:25.091206   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 88/120
	I0722 00:44:26.092927   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 89/120
	I0722 00:44:27.095371   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 90/120
	I0722 00:44:28.097080   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 91/120
	I0722 00:44:29.099489   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 92/120
	I0722 00:44:30.100775   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 93/120
	I0722 00:44:31.102113   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 94/120
	I0722 00:44:32.104127   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 95/120
	I0722 00:44:33.105645   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 96/120
	I0722 00:44:34.106911   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 97/120
	I0722 00:44:35.108763   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 98/120
	I0722 00:44:36.110408   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 99/120
	I0722 00:44:37.112362   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 100/120
	I0722 00:44:38.113755   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 101/120
	I0722 00:44:39.114856   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 102/120
	I0722 00:44:40.117280   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 103/120
	I0722 00:44:41.118571   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 104/120
	I0722 00:44:42.120538   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 105/120
	I0722 00:44:43.122185   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 106/120
	I0722 00:44:44.123778   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 107/120
	I0722 00:44:45.125175   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 108/120
	I0722 00:44:46.126592   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 109/120
	I0722 00:44:47.128836   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 110/120
	I0722 00:44:48.130432   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 111/120
	I0722 00:44:49.131908   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 112/120
	I0722 00:44:50.133269   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 113/120
	I0722 00:44:51.134589   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 114/120
	I0722 00:44:52.136659   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 115/120
	I0722 00:44:53.138066   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 116/120
	I0722 00:44:54.139359   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 117/120
	I0722 00:44:55.140608   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 118/120
	I0722 00:44:56.141854   69606 main.go:141] libmachine: (no-preload-945581) Waiting for machine to stop 119/120
	I0722 00:44:57.143320   69606 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 00:44:57.143368   69606 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0722 00:44:57.145088   69606 out.go:177] 
	W0722 00:44:57.146420   69606 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0722 00:44:57.146438   69606 out.go:239] * 
	W0722 00:44:57.148917   69606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:44:57.150032   69606 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-945581 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581
E0722 00:44:57.448433   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581: exit status 3 (18.65569855s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:45:15.806929   70939 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host
	E0722 00:45:15.806949   70939 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-945581" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.30s)
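
The countdown above is the whole story of this failure: the kvm2 driver requests a shutdown and then polls guest state once per second, giving up after 120 attempts with GUEST_STOP_TIMEOUT (exit status 82). A minimal Go sketch of that poll loop follows; stubDriver and its state strings are illustrative stand-ins, not minikube's actual driver API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stubDriver stands in for the kvm2 machine driver; here the guest
// ignores the shutdown request, which is what the log above suggests.
type stubDriver struct{ state string }

func (d *stubDriver) Stop() error               { return nil } // shutdown requested, guest ignores it
func (d *stubDriver) GetState() (string, error) { return d.state, nil }

// stopHost mirrors the observed behavior: request a stop, then poll
// once per second for up to 120 attempts before declaring a timeout.
func stopHost(d *stubDriver) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < 120; i++ {
		if s, _ := d.GetState(); s == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/120\n", i)
		time.Sleep(time.Second) // same one-second cadence as the log lines above
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopHost(&stubDriver{state: "Running"}); err != nil {
		fmt.Println("stop err:", err) // the "stop err:" line captured above
	}
}

A guest that never reaches "Stopped" within the 120 attempts exhausts the loop and surfaces exactly the "stop err" and "Exiting due to GUEST_STOP_TIMEOUT" lines captured above.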

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-360389 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-360389 --alsologtostderr -v=3: exit status 82 (2m0.515775456s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-360389"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 00:44:39.974220   70843 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:44:39.974337   70843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:44:39.974348   70843 out.go:304] Setting ErrFile to fd 2...
	I0722 00:44:39.974352   70843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:44:39.974563   70843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:44:39.974850   70843 out.go:298] Setting JSON to false
	I0722 00:44:39.974938   70843 mustload.go:65] Loading cluster: embed-certs-360389
	I0722 00:44:39.975289   70843 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:44:39.975356   70843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:44:39.975527   70843 mustload.go:65] Loading cluster: embed-certs-360389
	I0722 00:44:39.975659   70843 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:44:39.975702   70843 stop.go:39] StopHost: embed-certs-360389
	I0722 00:44:39.976125   70843 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:44:39.976169   70843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:44:39.990781   70843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I0722 00:44:39.991237   70843 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:44:39.991799   70843 main.go:141] libmachine: Using API Version  1
	I0722 00:44:39.991820   70843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:44:39.992192   70843 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:44:39.994328   70843 out.go:177] * Stopping node "embed-certs-360389"  ...
	I0722 00:44:39.995382   70843 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 00:44:39.995407   70843 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:44:39.995617   70843 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 00:44:39.995642   70843 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:44:39.998569   70843 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:44:39.998995   70843 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:43:46 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:44:39.999028   70843 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:44:39.999190   70843 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:44:39.999376   70843 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:44:39.999547   70843 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:44:39.999705   70843 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:44:40.099222   70843 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0722 00:44:40.171233   70843 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0722 00:44:40.248213   70843 main.go:141] libmachine: Stopping "embed-certs-360389"...
	I0722 00:44:40.248242   70843 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:44:40.249851   70843 main.go:141] libmachine: (embed-certs-360389) Calling .Stop
	I0722 00:44:40.253225   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 0/120
	I0722 00:44:41.254635   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 1/120
	I0722 00:44:42.256108   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 2/120
	I0722 00:44:43.257662   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 3/120
	I0722 00:44:44.259061   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 4/120
	I0722 00:44:45.260989   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 5/120
	I0722 00:44:46.262462   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 6/120
	I0722 00:44:47.263883   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 7/120
	I0722 00:44:48.265330   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 8/120
	I0722 00:44:49.266722   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 9/120
	I0722 00:44:50.268105   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 10/120
	I0722 00:44:51.269484   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 11/120
	I0722 00:44:52.270818   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 12/120
	I0722 00:44:53.272412   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 13/120
	I0722 00:44:54.273694   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 14/120
	I0722 00:44:55.275738   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 15/120
	I0722 00:44:56.277054   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 16/120
	I0722 00:44:57.278057   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 17/120
	I0722 00:44:58.279247   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 18/120
	I0722 00:44:59.280867   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 19/120
	I0722 00:45:00.283094   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 20/120
	I0722 00:45:01.284747   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 21/120
	I0722 00:45:02.286721   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 22/120
	I0722 00:45:03.287986   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 23/120
	I0722 00:45:04.289285   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 24/120
	I0722 00:45:05.291342   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 25/120
	I0722 00:45:06.292567   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 26/120
	I0722 00:45:07.293899   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 27/120
	I0722 00:45:08.295156   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 28/120
	I0722 00:45:09.296934   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 29/120
	I0722 00:45:10.299221   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 30/120
	I0722 00:45:11.300465   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 31/120
	I0722 00:45:12.301814   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 32/120
	I0722 00:45:13.303592   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 33/120
	I0722 00:45:14.305028   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 34/120
	I0722 00:45:15.307006   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 35/120
	I0722 00:45:16.308416   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 36/120
	I0722 00:45:17.309961   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 37/120
	I0722 00:45:18.311488   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 38/120
	I0722 00:45:19.312951   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 39/120
	I0722 00:45:20.315408   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 40/120
	I0722 00:45:21.316805   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 41/120
	I0722 00:45:22.318230   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 42/120
	I0722 00:45:23.319685   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 43/120
	I0722 00:45:24.321381   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 44/120
	I0722 00:45:25.323468   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 45/120
	I0722 00:45:26.324882   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 46/120
	I0722 00:45:27.326424   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 47/120
	I0722 00:45:28.328306   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 48/120
	I0722 00:45:29.329942   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 49/120
	I0722 00:45:30.332172   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 50/120
	I0722 00:45:31.333721   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 51/120
	I0722 00:45:32.335297   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 52/120
	I0722 00:45:33.336611   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 53/120
	I0722 00:45:34.338151   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 54/120
	I0722 00:45:35.339960   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 55/120
	I0722 00:45:36.341279   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 56/120
	I0722 00:45:37.342655   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 57/120
	I0722 00:45:38.343969   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 58/120
	I0722 00:45:39.345412   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 59/120
	I0722 00:45:40.347501   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 60/120
	I0722 00:45:41.348898   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 61/120
	I0722 00:45:42.350556   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 62/120
	I0722 00:45:43.352039   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 63/120
	I0722 00:45:44.353397   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 64/120
	I0722 00:45:45.355741   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 65/120
	I0722 00:45:46.357355   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 66/120
	I0722 00:45:47.358799   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 67/120
	I0722 00:45:48.360387   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 68/120
	I0722 00:45:49.362006   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 69/120
	I0722 00:45:50.364343   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 70/120
	I0722 00:45:51.365898   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 71/120
	I0722 00:45:52.367309   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 72/120
	I0722 00:45:53.368760   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 73/120
	I0722 00:45:54.370231   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 74/120
	I0722 00:45:55.372078   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 75/120
	I0722 00:45:56.373895   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 76/120
	I0722 00:45:57.375471   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 77/120
	I0722 00:45:58.376797   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 78/120
	I0722 00:45:59.378385   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 79/120
	I0722 00:46:00.380785   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 80/120
	I0722 00:46:01.382082   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 81/120
	I0722 00:46:02.383632   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 82/120
	I0722 00:46:03.384900   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 83/120
	I0722 00:46:04.386349   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 84/120
	I0722 00:46:05.388347   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 85/120
	I0722 00:46:06.389721   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 86/120
	I0722 00:46:07.391209   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 87/120
	I0722 00:46:08.392488   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 88/120
	I0722 00:46:09.393915   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 89/120
	I0722 00:46:10.396136   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 90/120
	I0722 00:46:11.397631   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 91/120
	I0722 00:46:12.398982   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 92/120
	I0722 00:46:13.400391   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 93/120
	I0722 00:46:14.401945   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 94/120
	I0722 00:46:15.403762   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 95/120
	I0722 00:46:16.405072   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 96/120
	I0722 00:46:17.406666   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 97/120
	I0722 00:46:18.407883   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 98/120
	I0722 00:46:19.409420   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 99/120
	I0722 00:46:20.411608   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 100/120
	I0722 00:46:21.413050   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 101/120
	I0722 00:46:22.414559   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 102/120
	I0722 00:46:23.415960   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 103/120
	I0722 00:46:24.417303   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 104/120
	I0722 00:46:25.419428   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 105/120
	I0722 00:46:26.420846   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 106/120
	I0722 00:46:27.422549   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 107/120
	I0722 00:46:28.424210   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 108/120
	I0722 00:46:29.425710   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 109/120
	I0722 00:46:30.427893   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 110/120
	I0722 00:46:31.429367   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 111/120
	I0722 00:46:32.430799   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 112/120
	I0722 00:46:33.432297   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 113/120
	I0722 00:46:34.433806   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 114/120
	I0722 00:46:35.435939   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 115/120
	I0722 00:46:36.437402   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 116/120
	I0722 00:46:37.439238   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 117/120
	I0722 00:46:38.440606   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 118/120
	I0722 00:46:39.442115   70843 main.go:141] libmachine: (embed-certs-360389) Waiting for machine to stop 119/120
	I0722 00:46:40.443334   70843 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0722 00:46:40.443402   70843 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0722 00:46:40.445153   70843 out.go:177] 
	W0722 00:46:40.446464   70843 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0722 00:46:40.446477   70843 out.go:239] * 
	W0722 00:46:40.449033   70843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:46:40.450194   70843 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-360389 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389
E0722 00:46:51.725472   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389: exit status 3 (18.522955524s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:46:58.974940   71865 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host
	E0722 00:46:58.974966   71865 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-360389" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.04s)
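
Before entering the same doomed wait loop, this log also shows the stop path's preparation step: it backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH. The sketch below reproduces that step with golang.org/x/crypto/ssh, using the address, user, and key path from the sshutil line above; it is a simplified stand-in for minikube's ssh_runner, not its real implementation.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address are copied from the sshutil line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	// Once the VM's network is gone, this dial is what fails with "no route to host".
	client, err := ssh.Dial("tcp", "192.168.72.32:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	for _, cmd := range []string{
		"sudo mkdir -p /var/lib/minikube/backup",
		"sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup",
		"sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup",
	} {
		sess, err := client.NewSession() // an SSH session runs exactly one command
		if err != nil {
			log.Fatal(err)
		}
		out, err := sess.CombinedOutput(cmd)
		sess.Close()
		if err != nil {
			log.Fatalf("%s: %v\n%s", cmd, err, out)
		}
	}
}

The same TCP dial to port 22 is what the post-mortem status checks attempt, which is why they fail with "no route to host" once the VM is unreachable.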

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
E0722 00:45:00.008751   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905: exit status 3 (3.167512723s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:45:02.334882   70969 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.97:22: connect: no route to host
	E0722 00:45:02.334902   70969 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.97:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be "Stopped" but got "Error"
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-214905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-214905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151766237s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.97:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-214905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905: exit status 3 (3.063745334s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:45:11.550991   71181 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.97:22: connect: no route to host
	E0722 00:45:11.551014   71181 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.97:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-214905" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
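
The assertion this test makes is narrow: after a stop, `minikube status --format={{.Host}}` must print "Stopped". Because the stop above timed out and left the VM unreachable, it prints "Error" instead, and the dashboard enable that follows has no host to SSH into. A sketch of the check, assuming the binary and profile names from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}",
		"-p", "default-k8s-diff-port-214905",
		"-n", "default-k8s-diff-port-214905").Output()
	got := strings.TrimSpace(string(out))
	// A non-zero exit alone is tolerated ("may be ok"); the wrong host
	// state is what fails the test.
	if got != "Stopped" {
		fmt.Printf("expected post-stop host status %q but got %q (err: %v)\n", "Stopped", got, err)
	}
}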

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-366657 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-366657 create -f testdata/busybox.yaml: exit status 1 (41.048637ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-366657" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-366657 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 6 (209.890305ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:45:02.275632   71057 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-366657" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 6 (218.020765ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:45:02.491994   71087 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-366657" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
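
The busybox manifest is not at fault here: the kubeconfig has no "old-k8s-version-366657" context at this point, so every kubectl call with --context fails immediately, and `minikube status` warns that kubectl points at a stale VM. A programmatic version of that check, assuming k8s.io/client-go is available (kubeconfig path from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/19312-5094/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["old-k8s-version-366657"]; !ok {
		// The same condition kubectl reports as: context "..." does not exist.
		fmt.Println(`context "old-k8s-version-366657" does not exist`)
	}
}

Once the profile is reachable again, running `minikube update-context`, as the warning itself suggests, rewrites the kubeconfig endpoint.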

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (77.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-366657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0722 00:45:05.129829   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:45:07.024735   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-366657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m16.920694781s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon while the cluster is active. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-366657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-366657 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-366657 describe deploy/metrics-server -n kube-system: exit status 1 (42.095174ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-366657" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on the metrics-server deployment. args "kubectl --context old-k8s-version-366657 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load the correct image. Expected it to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 6 (211.757217ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:46:19.667956   71637 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-366657" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (77.18s)
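
Unlike the post-stop variants, this enable reaches the VM and runs kubectl apply inside it; it is the apiserver at localhost:8443 that refuses the connection, so the addon callback fails with exit status 10. A minimal reachability probe of the kind that distinguishes "addon broken" from "control plane down" (host and port from the error above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Run inside the VM; localhost:8443 is the apiserver address the
	// addon apply above was refused on.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches "connection ... refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}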

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581: exit status 3 (3.167959172s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:45:18.974953   71284 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host
	E0722 00:45:18.974972   71284 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be "Stopped" but got "Error"
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-945581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-945581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15209157s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-945581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581
E0722 00:45:27.505155   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581: exit status 3 (3.063888245s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 00:45:28.191064   71365 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host
	E0722 00:45:28.191089   71365 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-945581" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
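
A detail worth keeping in mind when reading these post-mortems: helpers_test tolerates certain non-zero exits from `minikube status` ("status error: exit status 3 (may be ok)") and lets the reported host state decide the outcome. Extracting that exit code with os/exec looks like the sketch below; the binary and profile come from the log, and treating 3 and 6 as tolerable is an assumption based only on the statuses shown in this report.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}",
		"-p", "no-preload-945581", "-n", "no-preload-945581").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		switch ee.ExitCode() {
		case 3, 6: // the "may be ok" codes seen in this report
			fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
		default:
			fmt.Printf("unexpected status exit code: %d\n", ee.ExitCode())
		}
	} else if err != nil {
		fmt.Println("failed to run status:", err)
	}
}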

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (738.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-366657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0722 00:46:31.244787   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:31.994383   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:46:36.035866   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-366657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m14.826687244s)

                                                
                                                
-- stdout --
	* [old-k8s-version-366657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-366657" primary control-plane node in "old-k8s-version-366657" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 00:46:23.177785   71766 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:46:23.177943   71766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:46:23.177953   71766 out.go:304] Setting ErrFile to fd 2...
	I0722 00:46:23.177960   71766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:46:23.178138   71766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:46:23.178694   71766 out.go:298] Setting JSON to false
	I0722 00:46:23.179582   71766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5327,"bootTime":1721603856,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:46:23.179642   71766 start.go:139] virtualization: kvm guest
	I0722 00:46:23.181742   71766 out.go:177] * [old-k8s-version-366657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:46:23.183239   71766 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:46:23.183283   71766 notify.go:220] Checking for updates...
	I0722 00:46:23.185419   71766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:46:23.186634   71766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:46:23.187753   71766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:46:23.188742   71766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:46:23.189767   71766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:46:23.191234   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:46:23.191792   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:46:23.191849   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:46:23.206469   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46063
	I0722 00:46:23.206886   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:46:23.207383   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:46:23.207404   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:46:23.207727   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:46:23.207871   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:46:23.209464   71766 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0722 00:46:23.210491   71766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:46:23.210796   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:46:23.210851   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:46:23.225393   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0722 00:46:23.225752   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:46:23.226258   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:46:23.226285   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:46:23.226623   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:46:23.226782   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:46:23.260662   71766 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:46:23.261783   71766 start.go:297] selected driver: kvm2
	I0722 00:46:23.261805   71766 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:46:23.261925   71766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:46:23.262577   71766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:46:23.262668   71766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:46:23.277875   71766 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:46:23.278246   71766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:46:23.278276   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:46:23.278286   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:46:23.278336   71766 start.go:340] cluster config:
	{Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:46:23.278463   71766 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:46:23.280187   71766 out.go:177] * Starting "old-k8s-version-366657" primary control-plane node in "old-k8s-version-366657" cluster
	I0722 00:46:23.281481   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:46:23.281531   71766 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0722 00:46:23.281544   71766 cache.go:56] Caching tarball of preloaded images
	I0722 00:46:23.281616   71766 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:46:23.281628   71766 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0722 00:46:23.281734   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:46:23.281926   71766 start.go:360] acquireMachinesLock for old-k8s-version-366657: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
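
The 3m43s reported above is acquireMachinesLock waiting out a concurrent test that holds the machines lock; per the parameters printed earlier ({Delay:500ms Timeout:13m0s}), the lock is simply re-tried on a fixed delay until a timeout. A minimal Go sketch of that poll-with-timeout pattern, assuming a plain lock file; tryLock and acquireWithRetry are hypothetical names, not minikube's actual API:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // tryLock attempts to create the lock file exclusively; os.ErrExist
    // means another process currently holds the lock.
    func tryLock(path string) (bool, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
        if errors.Is(err, os.ErrExist) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, f.Close()
    }

    // acquireWithRetry polls every delay until timeout, mirroring the
    // {Delay:500ms Timeout:13m0s} parameters printed in the log.
    func acquireWithRetry(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ok, err := tryLock(path)
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            time.Sleep(delay)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        fmt.Println(acquireWithRetry("/tmp/machines.lock", 500*time.Millisecond, 3*time.Second))
    }
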
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
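
The irregular retry intervals in the wait loop above (263ms, 324ms, 301ms, ... 3.2s) are a jittered, growing backoff while polling libvirt for the VM's DHCP lease. A compressed sketch of that loop; lookupIP is a hypothetical stand-in for the lease query, not minikube's actual function:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP with a randomized, growing delay until the
    // machine reports an address or the attempt budget runs out.
    func waitForIP(lookupIP func() (string, bool)) (string, error) {
        backoff := 250 * time.Millisecond
        for attempt := 0; attempt < 15; attempt++ {
            if ip, ok := lookupIP(); ok {
                return ip, nil
            }
            // Jitter keeps parallel waiters from polling in lockstep,
            // which produces the uneven intervals seen in the log.
            d := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", d)
            time.Sleep(d)
            backoff = backoff * 3 / 2
        }
        return "", fmt.Errorf("machine never reported an IP")
    }

    func main() {
        tries := 0
        ip, err := waitForIP(func() (string, bool) {
            tries++
            return "192.168.39.174", tries > 3
        })
        fmt.Println(ip, err)
    }
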
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
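
The "exit 0" probe above is the usual way to detect that sshd is up and the key is accepted: run a no-op command with the exact flag set the log prints and treat a zero exit status as success. A sketch of the same probe via os/exec (key path shortened for illustration):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/.minikube/machines/old-k8s-version-366657/id_rsa",
            "-p", "22",
            "docker@192.168.39.174",
            "exit 0", // no-op: success only proves sshd is reachable and the key works
        }
        if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
            fmt.Println("SSH not available yet:", err)
            return
        }
        fmt.Println("SSH available")
    }
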
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
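
The server cert generated above carries the SAN list [127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657] so one certificate validates whichever name or address a client dials. A minimal crypto/x509 sketch of issuing such a cert; it generates a throwaway CA rather than reusing minikube's ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-366657"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            // The SAN list from the log, split into IPs and DNS names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")},
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-366657"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
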
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
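
The container-runtime step just completed is one composed remote command: write the sysconfig file via printf piped into sudo tee, then restart crio so the flag takes effect. A tiny sketch of assembling such a command string (values copied from the log above):

    package main

    import "fmt"

    // sysconfigCmd composes the remote shell line seen in the log: write the
    // file via printf | sudo tee, then restart the service so the new flag
    // takes effect.
    func sysconfigCmd(path, content, service string) string {
        return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee %s && sudo systemctl restart %s`,
            content, path, service)
    }

    func main() {
        fmt.Println(sysconfigCmd("/etc/sysconfig/crio.minikube",
            "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n",
            "crio"))
    }
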
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
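
The clock check above reads "date +%s.%N" from the guest and compares it with the host's wall clock; here the delta is ~96ms, inside tolerance, so no clock correction is forced. Reproducing the arithmetic with the two timestamps from the log:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's "seconds.nanoseconds" output and returns
    // the absolute difference from the host's clock.
    func clockDelta(guestUnix string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestUnix, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        d := host.Sub(guest)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        // Both values taken from the fix.go lines above.
        host := time.Date(2024, time.July, 22, 0, 50, 25, 903106071, time.UTC)
        d, _ := clockDelta("1721609425.999209033", host)
        fmt.Printf("delta=%s within 2s tolerance: %v\n", d, d <= 2*time.Second)
    }
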
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
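
The status-255 sysctl above is expected on a freshly booted guest: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the failed probe is treated as a cue to modprobe the module and then enable IPv4 forwarding. A sketch of that probe-and-fallback (it runs real commands, so root only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and folds its combined output into the error.
    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %w (%s)", name, err, out)
        }
        return nil
    }

    func main() {
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            // "couldn't verify netfilter ... which might be okay": load the
            // module instead of failing hard.
            fmt.Println("sysctl probe failed, loading br_netfilter:", err)
            if err := run("modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe failed:", err)
                return
            }
        }
        if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }
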
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
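
The preload sequence above is check, copy, extract, clean up: stat /preloaded.tar.lz4 on the guest, scp the ~473MB cached tarball over when the stat fails, untar it into /var with xattrs preserved, then remove it. A compressed sketch of that flow; sshRun and scpFile are hypothetical helpers, not minikube's ssh_runner API:

    package main

    import "fmt"

    // ensurePreload mirrors the logged steps: probe, transfer if missing,
    // extract, then delete the tarball.
    func ensurePreload(sshRun func(cmd string) error, scpFile func(src, dst string) error) error {
        const tarball = "/preloaded.tar.lz4"
        if err := sshRun(`stat -c "%s %y" ` + tarball); err != nil {
            // "No such file or directory": push the cached tarball over SSH.
            if err := scpFile("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4", tarball); err != nil {
                return err
            }
        }
        if err := sshRun("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball); err != nil {
            return err
        }
        return sshRun("rm " + tarball)
    }

    func main() {
        run := func(cmd string) error { fmt.Println("ssh:", cmd); return nil }
        scp := func(src, dst string) error { fmt.Println("scp:", src, "->", dst); return nil }
        fmt.Println(ensurePreload(run, scp))
    }
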
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
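Each "daemon lookup ... No such image" line above is a cache miss against the local image daemon before falling back to other sources. A hedged sketch of that probe, assuming a docker CLI on PATH (the image name is taken from the log; this is not minikube's actual lookup code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether the local daemon already has the image,
    // mirroring the "daemon lookup" probes in the log above.
    func imageInDaemon(ref string) bool {
    	// `docker image inspect` exits non-zero when the image is absent.
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	ref := "registry.k8s.io/pause:3.2"
    	if !imageInDaemon(ref) {
    		fmt.Printf("daemon lookup for %s failed; falling back to cache/remote\n", ref)
    	}
    }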
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
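The failure above is a plain stat miss: the per-image cache file was never written on this host, so the load aborts and startup continues without it. A minimal sketch of the same existence guard, with an illustrative (shortened) cache path:

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	// Illustrative path; the real cache layout is shown in the log lines above.
    	cached := "/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2"
    	if _, err := os.Stat(cached); errors.Is(err, fs.ErrNotExist) {
    		fmt.Printf("X Unable to load cached images: stat %s: no such file or directory\n", cached)
    		return
    	}
    	fmt.Println("cached image present, proceeding with transfer")
    }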
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
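The kubeadm YAML above is rendered from the options struct printed a few lines earlier. A hedged sketch of that render step using text/template; the struct and field names here are invented for illustration and are not minikube's actual types:

    package main

    import (
    	"os"
    	"text/template"
    )

    // opts carries only the fields this sketch needs; the real generator
    // carries many more (see the kubeadm options dump above).
    type opts struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	_ = t.Execute(os.Stdout, opts{
    		AdvertiseAddress: "192.168.39.174",
    		BindPort:         8443,
    		NodeName:         "old-k8s-version-366657",
    		PodSubnet:        "10.244.0.0/16",
    	})
    }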
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
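The bash one-liner two lines up rewrites /etc/hosts idempotently: filter out any stale control-plane entry, append the current mapping, then copy the temp file back into place. The same idea as a standalone sketch (writing to a scratch path rather than touching /etc/hosts directly):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.174\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any previous control-plane mapping, whatever its IP.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	// Write to a scratch file; the log's version copies it over /etc/hosts with sudo.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote /tmp/hosts.new with refreshed control-plane entry")
    }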
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
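Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The equivalent check in Go's standard library (the path is one of those probed above; run with privileges sufficient to read it):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// -checkend 86400: fail if the cert is no longer valid 24h from now.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 86400s; regeneration needed")
    	} else {
    		fmt.Println("certificate valid beyond the check window")
    	}
    }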
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
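The four grep/rm cycles above implement one rule: keep a kubeconfig file only if it already references the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. A compact sketch of that rule (not minikube's actual cleanup code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
    			os.Remove(f)
    			fmt.Printf("%q may not reference %s - removed\n", f, endpoint)
    		}
    	}
    }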
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
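A restart drives kubeadm phase by phase rather than running a full `kubeadm init`; the five invocations above cover certs, kubeconfig, kubelet-start, control-plane, and etcd. A sketch of sequencing those phases from Go, with the binary path and config file taken from the log:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		// Prefix PATH so the pinned v1.20.0 binaries win, as in the log.
    		cmd := exec.Command("sudo", append([]string{"env",
    			"PATH=/var/lib/minikube/binaries/v1.20.0:" + os.Getenv("PATH"),
    			"kubeadm"}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
    		}
    	}
    }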
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
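The long run of pgrep lines above is a roughly 500ms poll for the apiserver process; once the wait window expires, the flow below switches to collecting diagnostics. A sketch of that wait loop, with an assumed one-minute deadline (the real timeout is configurable and not stated in this excerpt):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the pgrep probe repeated in the log above.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(1 * time.Minute) // assumed window for this sketch
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("apiserver process appeared")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing of the log lines
    	}
    	fmt.Println("apiserver did not appear before the deadline; collecting logs")
    }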
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
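When the wait fails, the same diagnostic bundles are gathered on every cycle: kubelet, dmesg, describe nodes, CRI-O, and container status. A sketch of that collection loop with the shell commands copied from the log (the describe-nodes step is omitted here since it needs a working kubeconfig; output handling is simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	steps := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range steps {
    		fmt.Printf("Gathering logs for %s ...\n", s.name)
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("failed to gather %s: %v\n", s.name, err)
    			continue
    		}
    		fmt.Printf("%s", out)
    	}
    }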
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
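Each `sudo crictl ps -a --quiet --name=<component>` probe prints one container ID per line, so empty output is exactly what produces the repeated `found id: ""` / `0 containers: []` pair and the "No container was found matching" warning. A sketch of that parsing, assuming a local crictl binary; listContainers is a hypothetical name, not the cri.go API itself:

// listContainers sketches the probe behind the "found id" lines:
// `crictl ps -a --quiet --name=X` emits one container ID per line,
// so empty output means zero containers for component X.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}

Here every component (kube-apiserver, etcd, coredns, and the rest) returns an empty list, so the control plane never came up on this node.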
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
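Every `describe nodes` attempt fails the same way because nothing is listening on localhost:8443: with zero kube-apiserver containers, kubectl's connection is refused before any request can be made. A direct reachability check would surface the same refusal. This sketch assumes the standard Kubernetes /healthz endpoint and skips TLS verification purely for illustration:

// probeAPIServer sketches a direct check for the condition behind
// "The connection to the server localhost:8443 was refused": nothing
// is listening on the apiserver port. InsecureSkipVerify is for
// illustration only; do not use it outside a throwaway probe.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// A refused connection surfaces here, matching the kubectl error above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}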
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
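
The cycle above is minikube's control-plane probe: it looks for a kube-apiserver process with pgrep, then asks the CRI runtime for each expected component by name before falling back to log collection. A minimal bash sketch of that per-component check, reusing the exact crictl invocation recorded in the log (the component list is copied from the probes above; sudo and a working crictl on the node are assumed, as in this run):

	# Sketch of the container probe the log records (cri.go:54, logs.go:276/278).
	# Assumes crictl talks to CRI-O, as on this test VM.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	  else
	    echo "found id: $ids"
	  fi
	done

An empty result for every name, as seen here, means the runtime currently holds no control-plane containers at all (crictl ps -a includes exited ones), which is why each iteration falls through to gathering logs.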
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
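
With nothing found, each iteration gathers the same diagnostic bundle: the kubelet and CRI-O journals, recent dmesg warnings, kubectl describe nodes, and a container status listing. A hedged one-shot equivalent, copying the commands verbatim from the log lines above (the kubectl path /var/lib/minikube/binaries/v1.20.0/kubectl and the -n 400 tail length are specific to this run):

	# The diagnostic bundle each loop iteration collects (logs.go:123).
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a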
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
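
Every describe-nodes attempt above fails identically: the connection to localhost:8443 is refused, which is consistent with the empty kube-apiserver probes. A quick hedged check for whether anything is serving that port (the ss invocation is an assumption, not something the test runs; the pgrep line is the same probe the loop itself uses):

	# Is anything listening where the kubeconfig points?
	sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"
	# The loop's own process probe for a running apiserver:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"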
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
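
The cycle above repeats for the rest of this log: minikube's log collector asks crictl for each expected control-plane container by name and finds none, because the apiserver never came up on this node. A minimal sketch of that probe, assuming it runs on the node itself with sudo and crictl available (minikube actually drives this over SSH via ssh_runner; the components list here is just the names visible in the log, and the program is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // components mirrors the container names probed in the log above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
        for _, name := range components {
            // Same probe the log shows: list all containers whose name matches,
            // IDs only. An empty result is the "found id: \"\" / 0 containers" pair.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers\n", name, len(ids))
        }
    }
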
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
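
The "container status" step is a shell or-chain: try crictl first (resolving it with `which`, falling back to the bare name), and only if that fails fall back to docker. A hedged Go equivalent of that fallback logic, with a hypothetical runFirst helper that is not minikube's code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runFirst tries each command line in order and returns the first success,
    // mirroring the `crictl ps -a || docker ps -a` fallback in the log.
    func runFirst(cmds [][]string) ([]byte, error) {
        var lastErr error
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            if err == nil {
                return out, nil
            }
            lastErr = err
        }
        return nil, lastErr
    }

    func main() {
        out, err := runFirst([][]string{
            {"sudo", "crictl", "ps", "-a"},
            {"sudo", "docker", "ps", "-a"},
        })
        if err != nil {
            fmt.Println("no container runtime answered:", err)
            return
        }
        fmt.Print(string(out))
    }
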
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
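
Every describe-nodes attempt fails the same way: connection refused on localhost:8443, i.e. nothing is listening on the apiserver's secure port. A quick standalone check of that condition, assuming it runs on the node; it only tests for a TCP listener, not API health:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The describe-nodes failures above all reduce to this: no listener on
        // the apiserver port, so every kubectl call is refused immediately.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // matches "connection refused"
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }
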
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
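
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, a cheap process-level check before the per-container probes; pgrep exits with status 1 when no process matches. A sketch of interpreting that exit status from Go (assumes pgrep and sudo on PATH; again illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same check the log repeats before every probe cycle: is there a
        // kube-apiserver process at all?
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        if err != nil {
            if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
                fmt.Println("no kube-apiserver process found")
                return
            }
            fmt.Println("pgrep failed:", err)
            return
        }
        fmt.Println("kube-apiserver process is running")
    }
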
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
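
The kubelet and CRI-O sections are gathered the same way: the last 400 lines of each systemd unit via journalctl, run through bash. A minimal sketch with a hypothetical gatherUnit helper (minikube runs the identical command remotely through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherUnit mirrors the journalctl calls in the log: last 400 lines of a
    // systemd unit, executed through bash exactly as shown above.
    func gatherUnit(unit string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo journalctl -u %s -n 400", unit)).Output()
        return string(out), err
    }

    func main() {
        for _, u := range []string{"kubelet", "crio"} {
            logs, err := gatherUnit(u)
            if err != nil {
                fmt.Printf("gather %s: %v\n", u, err)
                continue
            }
            fmt.Printf("=== %s (%d bytes) ===\n", u, len(logs))
        }
    }
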
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
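
The timestamps show the whole cycle re-running roughly every three seconds, which is the shape of a plain poll-until-timeout loop. A generic sketch of that pattern with a hypothetical pollUntil helper; the interval and timeout values here are illustrative, not the ones this test actually uses:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pollUntil retries fn on a fixed interval until it succeeds or the
    // deadline passes -- the same cadence as the ~3s probe cycles above.
    func pollUntil(interval, timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for apiserver")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := pollUntil(3*time.Second, 30*time.Second, func() error {
            return errors.New("apiserver still down") // stand-in for a real probe
        })
        fmt.Println(err)
    }
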
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
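
The dmesg step narrows the kernel ring buffer to warning-and-worse messages (--level warn,err,crit,alert,emerg) in human-readable form (-H), with no pager (-P, per util-linux dmesg) and no color (-L=never), capped at the last 400 lines. Running the same pipeline from Go (assumes bash and util-linux dmesg on the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Identical pipeline to the dmesg gather shown in the log above.
        out, err := exec.Command("/bin/bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").Output()
        if err != nil {
            fmt.Println("dmesg gather failed:", err)
            return
        }
        fmt.Print(string(out))
    }
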
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
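
Note that describe-nodes uses a version-pinned kubectl shipped under /var/lib/minikube/binaries/v1.20.0 together with an explicit kubeconfig, so the probe exercises exactly the cluster under test rather than whatever kubectl happens to be on PATH. A sketch reproducing that invocation (paths taken verbatim from the log; with no apiserver listening it fails exactly as shown above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Version-pinned kubectl with an explicit kubeconfig, as in the log.
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.20.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // With nothing on localhost:8443 this prints "connection refused".
            fmt.Printf("describe nodes failed: %v\n%s", err, out)
            return
        }
        fmt.Print(string(out))
    }
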
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
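	(Editor's note: the cycle above is minikube probing the CRI for each control-plane component by name and treating empty crictl output as "no container found". Below is a minimal, self-contained sketch of that probe pattern; it runs crictl locally with os/exec for illustration, whereas minikube executes these commands over SSH via ssh_runner, and listContainerIDs is a hypothetical helper, not minikube's actual cri.go.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns
	// the container IDs crictl prints, one per line. An empty slice means
	// no container matched the name filter.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		// strings.Fields splits on whitespace and drops blank lines.
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("probe %s: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: found %v\n", c, ids)
		}
	}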
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
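	(Editor's note: when every probe comes back empty, the gathering pass above pulls kubelet, dmesg, and CRI-O output with fixed journalctl and dmesg invocations. A hedged sketch of that sequence follows, reusing the exact command strings from the log; the gather helper and local execution are illustrative assumptions, not minikube's logs.go.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one log-collection command through bash, mirroring the
	// command strings that appear verbatim in the log above.
	func gather(label, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
	}

	func main() {
		// Last 400 lines of the kubelet unit journal.
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		// dmesg: human-readable (-H), no pager (-P), no color (-L=never),
		// warning level and above only.
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		// Last 400 lines of the CRI-O unit journal.
		gather("CRI-O", "sudo journalctl -u crio -n 400")
	}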
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
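	(Editor's note: the "container status" step above uses a shell fallback idiom: `which crictl || echo crictl` keeps the command line well-formed even when crictl is not on PATH, and the trailing `|| sudo docker ps -a` falls back to Docker if the crictl invocation fails for any reason. A small sketch of the same idiom; containerStatus is a hypothetical wrapper, not minikube's implementation.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus runs the fallback pipeline exactly as it appears in
	// the log and returns the combined stdout/stderr.
	func containerStatus() (string, error) {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(out)
	}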
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
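
The block above is one iteration of minikube's wait loop for the control plane: probe for a kube-apiserver process, list each expected CRI container by name, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying roughly three seconds later. A minimal Go sketch of that retry pattern, under stated assumptions (the helper name, the 3-second interval, and the 5-minute timeout are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
    // probe at the top of each cycle: exit status 0 means a match was found.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(5 * time.Minute) // assumed overall timeout
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		// On each miss the real tool also lists CRI containers and
    		// gathers kubelet/dmesg/CRI-O logs, as recorded above.
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
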
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
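
Each "listing CRI containers" step above reduces to the same mechanic: run crictl with a name filter in quiet mode, which prints matching container IDs one per line, and count them; empty output is what produces the 'found id: ""' and '0 containers: []' lines. A simplified local analogue in Go (assumes crictl is installed where the program runs; minikube actually executes this over SSH inside the guest):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// --quiet prints only container IDs; empty output means no match.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=etcd").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
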
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
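	For readers following the poll loop above: each pass is just `sudo crictl ps -a --quiet --name=<component>` plus an empty-output check, repeated for every control-plane component. A minimal Go sketch of that probe, assuming only that crictl is on PATH and sudo is available; the names and structure here are illustrative, not minikube's actual cri.go:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainerIDs mirrors the `crictl ps -a --quiet --name=<c>` calls in the log.
	    func listContainerIDs(component string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	        if err != nil {
	            return nil, err
	        }
	        // --quiet prints one container ID per line; no output means no matches.
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
	            if ids, err := listContainerIDs(c); err == nil && len(ids) == 0 {
	                fmt.Printf("no container was found matching %q\n", c) // the W lines above
	            }
	        }
	    }

	In this run every probe comes back empty, which is why each cycle falls through to gathering kubelet, dmesg, and CRI-O logs instead.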
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
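	The recurring "connection to the server localhost:8443 was refused" in the describe-nodes attempts just means nothing is listening on the apiserver port yet. A two-line TCP probe makes that failure mode concrete (illustrative sketch, not part of the test harness):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Probe the apiserver port that `kubectl describe nodes` keeps failing against.
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            fmt.Println("apiserver unreachable:", err) // "connection refused" while no kube-apiserver runs
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver port is open")
	    }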
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>

	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
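	The four grep/rm pairs above implement a stale-kubeconfig sweep: each /etc/kubernetes/*.conf survives only if it references https://control-plane.minikube.internal:8443, and here all four are already absent. A compact Go sketch of the same logic, with os.ReadFile/os.Remove standing in for the log's `sudo grep`/`sudo rm -f` (illustrative only):

	    package main

	    import (
	        "bytes"
	        "fmt"
	        "os"
	    )

	    func main() {
	        const endpoint = "https://control-plane.minikube.internal:8443"
	        for _, f := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            data, err := os.ReadFile(f)
	            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
	                // Missing file or wrong endpoint: treat as stale, as the log does.
	                fmt.Printf("removing stale config %s\n", f)
	                _ = os.Remove(f)
	            }
	        }
	    }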
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
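	The repeated [kubelet-check] lines above are kubeadm polling the kubelet's healthz endpoint on port 10248 until a deadline passes. A sketch of that retry loop, assuming an interval and deadline chosen for illustration rather than kubeadm's exact values:

	    package main

	    import (
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := http.Get("http://localhost:10248/healthz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("kubelet healthy")
	                    return
	                }
	            }
	            time.Sleep(5 * time.Second) // wait between checks
	        }
	        fmt.Println("timed out waiting for the condition") // the failure reported above
	    }

	In this run every GET fails with "connection refused" because the kubelet never comes up, so the loop exhausts its deadline and kubeadm aborts the wait-control-plane phase.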
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-366657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
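The kubeadm output above points at a single failure mode: the kubelet on the node never became healthy, so the control-plane static pods never started and 'kubeadm init' timed out in the wait-control-plane phase. A minimal triage sketch follows, assuming the profile name taken from the log (old-k8s-version-366657) and that 'minikube ssh' can still reach the node; the cgroup-driver flag in the last command is the suggestion minikube itself prints above, not a verified fix, and CONTAINERID is a hypothetical placeholder.

    # Check whether the kubelet is running, and why it may have exited.
    minikube -p old-k8s-version-366657 ssh "sudo systemctl status kubelet"
    minikube -p old-k8s-version-366657 ssh "sudo journalctl -xeu kubelet | tail -n 50"

    # List control-plane containers through CRI-O; inspect the logs of a failing one.
    minikube -p old-k8s-version-366657 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    minikube -p old-k8s-version-366657 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"

    # Retry the start with the kubelet cgroup driver minikube suggests above.
    minikube start -p old-k8s-version-366657 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd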
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 2 (243.124668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-366657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-366657 logs -n 25: (1.596530094s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-590595             | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-590595                  | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:19.710843   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:25.790913   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:28.862882   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:34.942917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:38.014863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:44.094898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:47.166853   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:53.246799   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:56.318939   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:02.398890   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:05.470909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:11.550863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:14.622851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:20.702859   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:23.774851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:29.854925   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:32.926912   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:39.006904   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:42.078947   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:48.158822   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:51.230942   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:57.310909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:00.382907   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:06.462849   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:09.534836   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:15.614953   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:18.686869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:24.766917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:27.838869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:33.918902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:36.990920   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:43.070898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
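The 4m20s/4m37s duration metrics around these lines come from contenders queuing on a per-process machines lock: a start can only proceed once the previous holder releases it, and the wait time is logged on acquisition. A rough sketch of such a named lock with a wait metric (hypothetical acquireMachinesLock helper; the real locking scheme is an assumption):

// Sketch only: a named machine lock where callers record how long they
// waited, mirroring the "took 4m20s to acquireMachinesLock" metrics.
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	locks = map[string]*sync.Mutex{}
)

func acquireMachinesLock(profile string) func() {
	mu.Lock()
	l, ok := locks[profile]
	if !ok {
		l = &sync.Mutex{}
		locks[profile] = l
	}
	mu.Unlock()

	start := time.Now()
	l.Lock() // blocks while another start/stop holds this machine
	fmt.Printf("took %s to acquireMachinesLock for %q\n", time.Since(start), profile)
	return l.Unlock
}

func main() {
	release := acquireMachinesLock("no-preload-945581")
	defer release()
	// ... fixHost / provisioning would run here ...
}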
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
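The retry.go lines above show a jittered, growing backoff (218ms, 289ms, ... 2.7s) while polling libvirt's DHCP leases for the domain's IP. A sketch of that wait loop under the same assumptions (lookupIP is a hypothetical stand-in for the lease lookup):

// Sketch (hypothetical names): poll for the VM's IP with growing, jittered
// delays, matching the retry.go cadence logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for parsing the libvirt DHCP leases; it fails until
// the guest has actually requested an address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(mac string, attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 3*time.Second {
			delay += delay / 2 // grow roughly geometrically, loosely capped
		}
	}
	return "", fmt.Errorf("no IP for %s after %d attempts", mac, attempts)
}

func main() {
	ip, err := waitForIP("52:54:00:2e:d4:7d", 5)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}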
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
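"Using SSH client type: external" means the driver shells out to /usr/bin/ssh with the fixed option set shown above rather than using a Go SSH library. A sketch of that invocation (hypothetical sshRun wrapper; the key path is illustrative):

// Sketch: run `exit 0` over SSH the way the "external" client lines above
// do, by shelling out to /usr/bin/ssh with a pinned option set.
package main

import (
	"fmt"
	"os/exec"
)

func sshRun(ip, keyPath, command string) ([]byte, error) {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		command,
	}
	return exec.Command("/usr/bin/ssh", args...).CombinedOutput()
}

func main() {
	out, err := sshRun("192.168.50.251", "/path/to/id_rsa", "exit 0")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}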
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
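Hostname provisioning is two SSH commands: set the hostname and tee it into /etc/hostname, then patch the 127.0.1.1 entry in /etc/hosts as in the inline shell above. A sketch that just assembles those commands from the machine name (hypothetical hostnameCommands helper):

// Sketch: build the two provisioning commands the log shows being run
// over SSH for a given machine name.
package main

import "fmt"

func hostnameCommands(name string) []string {
	return []string{
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, name),
	}
}

func main() {
	for _, cmd := range hostnameCommands("no-preload-945581") {
		fmt.Println(cmd)
	}
}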
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
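configureAuth generates a server certificate whose SANs cover the VM's IP plus the localhost/minikube/profile names, then copies the CA and server cert/key into /etc/docker. A compressed sketch of the certificate step (self-signed here for brevity; minikube signs with its own CA, and the key size and validity period are assumptions):

// Sketch: issue a server certificate whose SANs match the san=[...] list
// logged by provision.go above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-945581"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.251")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-945581"},
	}
	// Self-signed for the sketch; the real flow signs with ca-key.pem.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}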
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
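The guest-clock check runs `date +%s.%N` on the VM, parses the result, and accepts the skew if the delta against the host clock stays within tolerance. A sketch of that comparison (the tolerance value is an assumption):

// Sketch: parse the guest's `date +%s.%N` output and compare it with the
// host's clock, as the fix.go lines above do.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOut := "1721609407.082052746" // what the guest printed
	secs, _ := strconv.ParseFloat(guestOut, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold
	fmt.Printf("guest clock delta %s within tolerance: %v\n", delta, delta <= tolerance)
}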
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
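These steps rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image, switch cgroup_manager to cgroupfs, reset conmon_cgroup, and seed default_sysctls. A sketch of the same edits done with Go regexes instead of sed (the file contents here are illustrative):

// Sketch: apply the pause_image and cgroup_manager rewrites from the log
// with regexp replacements rather than sed.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte("pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n")
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = re.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	re = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = re.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	fmt.Print(string(conf))
}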
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
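After `systemctl restart crio`, start.go waits up to 60s for the CRI socket to reappear and then asks crictl for the runtime version. A sketch of that wait-then-query step (timeout and poll interval are assumptions):

// Sketch: stat the CRI socket until it appears, then query crictl, as in
// the "Will wait 60s for socket path" lines above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			fmt.Println("crio socket never appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}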
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
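
The "daemon lookup ... No such image" entries above are expected on this host: minikube first asks the local Docker daemon for each image and only falls back to its on-disk cache on a miss. A minimal sketch of the same probe (image name taken from the log above):

    # Ask the local Docker daemon for the image; a non-zero exit
    # ("No such image") is what triggers the cache fallback.
    docker image inspect registry.k8s.io/pause:3.10 >/dev/null 2>&1 \
      || echo "not in daemon; loading from cache"
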
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
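
The stat runs above compare the size and modification time of each cached tarball already on the guest against the host copy; a match produces the "copy: skipping ... (exists)" lines that follow. The check itself is a plain stat:

    # %s = size in bytes, %y = last modification time; matching output
    # lets minikube skip re-copying the image tarball to the guest.
    stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
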
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
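
The retry loop above is libmachine polling libvirt until the restarted VM obtains a DHCP lease. Assuming virsh access on the host, the same lookup can be done manually (network name taken from the log):

    # The VM's IP appears here once the guest has booted and
    # requested a lease on the minikube-created network.
    virsh net-dhcp-leases mk-old-k8s-version-366657
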
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
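
The sequence that just completed (podman image inspect, crictl rmi, podman load) is how minikube reconciles the CRI-O image store with its cache when no preload tarball is available. A condensed sketch of one iteration, using the etcd image from the log:

    IMG=registry.k8s.io/etcd:3.5.14-0
    TAR=/var/lib/minikube/images/etcd_3.5.14-0
    # Drop any stale tag first (ignore the error if it is absent),
    # then load the cached tarball into the shared containers/storage.
    sudo crictl rmi "$IMG" 2>/dev/null || true
    sudo podman load -i "$TAR"
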
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
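
With the drop-in, unit file, and kubeadm.yaml written, the kubelet is restarted through systemd. Two standard systemctl commands (nothing minikube-specific) confirm the unit picked up the generated ExecStart:

    # Show the unit plus its drop-ins, then confirm it is running.
    systemctl cat kubelet
    systemctl is-active kubelet
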
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
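
The hash-and-symlink pairs above follow the standard OpenSSL CA directory layout: each trusted certificate is linked as <subject-hash>.0 so that library lookups can locate it. Reproducing the minikubeCA link from the log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in the log above
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
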
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
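
Each -checkend 86400 run asks openssl whether the certificate expires within the next 24 hours (86400 seconds); exit status 0 means it does not, so the restart can keep the existing certs instead of regenerating them. For example:

    # Exit 0: valid for at least another day; exit 1: renewal needed.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "cert ok" || echo "cert expiring"
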
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
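
The four grep-then-rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. The pattern is equivalent to this loop:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it points at the expected endpoint.
      sudo grep -q 'https://control-plane.minikube.internal:8443' \
        "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
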
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
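
"Restarting existing kvm2 VM" maps onto libmachine re-activating the libvirt networks and booting the stopped domain. Assuming direct virsh access, the equivalent manual steps would be roughly:

    # Ensure both networks are up, then boot the stopped domain.
    virsh net-start default 2>/dev/null || true
    virsh net-start mk-embed-certs-360389 2>/dev/null || true
    virsh start embed-certs-360389
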
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
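
fix.go reads the guest's `date +%s.%N`, compares it against the host clock, and accepts the result when the delta falls inside a tolerance window, as the lines above show. A sketch of that comparison (the 2s tolerance here is an assumption, not minikube's constant):

    // Sketch: parse "seconds.nanoseconds" from the guest and compare it
    // with the local clock, the shape of the fix.go check above.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1721609425.999209033" // output of `date +%s.%N` on the guest
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed tolerance
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
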
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
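
The find pipeline above disables conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix. The equivalent logic as a Go sketch, using the patterns from the logged command:

    // Sketch: disable bridge/podman CNI configs by renaming them, like the
    // `find ... -exec mv` pipeline in the log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pattern)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Println("skip:", err)
                }
            }
        }
    }
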
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
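
This is a fallback chain: when the bridge-nf sysctl is missing (the module isn't loaded, hence the status 255 above), load br_netfilter, then enable IPv4 forwarding. A compact sketch of the same chain (needs root; error handling trimmed):

    // Sketch of the netfilter fallback above: if the bridge-nf sysctl is
    // absent, load br_netfilter, then make sure IPv4 forwarding is on.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // /proc/sys/net/bridge/* only exists once the module is loaded.
            _ = exec.Command("modprobe", "br_netfilter").Run()
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        _ = os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }
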
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
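
The /etc/hosts rewrite above is an idempotent append: drop any stale host.minikube.internal line, then add the fresh mapping. A sketch of the same edit in Go:

    // Sketch of the hosts update above: strip the old host.minikube.internal
    // entry, append the new mapping, write the file back.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }
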
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
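
The healthz loop above treats 403 (RBAC bootstrap roles not yet applied) and 500 (post-start hooks still failing) as retryable, and keeps polling until it sees 200. A condensed sketch of such a poll; the InsecureSkipVerify transport stands in for minikube's real client TLS setup:

    // Sketch: poll an apiserver /healthz until it returns 200, retrying
    // on 403/500 the way api_server.go does above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.251:8443/healthz"
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz ok: %s\n", body)
                    return
                }
                fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }
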
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
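
The preload tarball is unpacked into /var with extended attributes preserved, and the run is timed the way ssh_runner's duration metrics are. A sketch of the same invocation driven from Go:

    // Sketch: extract the preloaded image tarball as the log shows,
    // timing the run like ssh_runner's duration metric.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("extracted in %s\n", time.Since(start))
    }
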
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
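
Each "daemon lookup" failure above means the image was absent from a local Docker daemon, so the loader falls back to other sources (the on-disk cache here, podman inspect below). A rough sketch of a daemon-then-registry fallback using the go-containerregistry library (an assumed dependency for this sketch, not necessarily image.go's exact internals):

    // Sketch (assumes github.com/google/go-containerregistry): try the
    // local daemon first, fall back to the registry when it's absent.
    package main

    import (
        "fmt"

        "github.com/google/go-containerregistry/pkg/name"
        v1 "github.com/google/go-containerregistry/pkg/v1"
        "github.com/google/go-containerregistry/pkg/v1/daemon"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    func fetch(image string) (v1.Image, error) {
        ref, err := name.ParseReference(image)
        if err != nil {
            return nil, err
        }
        if img, err := daemon.Image(ref); err == nil {
            return img, nil // found in the local daemon
        }
        return remote.Image(ref) // daemon lookup failed; go to the registry
    }

    func main() {
        img, err := fetch("registry.k8s.io/pause:3.2")
        if err != nil {
            fmt.Println("fetch failed:", err)
            return
        }
        d, _ := img.Digest()
        fmt.Println("digest:", d)
    }
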
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
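
The 1-k8s.conflist copied above is a bridge-plugin CNI chain. A plausible shape for such a file, written from Go (field values here are common bridge/host-local defaults, not the byte-for-byte 496-byte file from the log):

    // Sketch: write a minimal bridge CNI conflist like the one minikube
    // copies to /etc/cni/net.d. Contents are assumed defaults.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err) // needs root on a real node
        }
    }
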
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
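
The NodePressure verification reads each node's reported capacities (ephemeral storage, cpu) from the API, as the two lines above show. A client-go sketch of the same read (the kubeconfig path is an assumption):

    // Sketch with client-go: list nodes and print the capacities the
    // NodePressure check inspects.
    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[v1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
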
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
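The kubeadm config above is rendered from minikube's typed options (kubeadm.go:181/187). As a rough illustration of that render step only, here is a Go text/template sketch; the struct fields and template below are invented stand-ins for the sketch, not minikube's actual definitions:

	package main

	import (
		"os"
		"text/template"
	)

	// initOpts is an illustrative subset of the options that feed the
	// InitConfiguration section of the kubeadm config.
	type initOpts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}

	const initTmpl = "apiVersion: kubeadm.k8s.io/v1beta2\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.BindPort}}\n" +
		"nodeRegistration:\n" +
		"  name: \"{{.NodeName}}\"\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		// Values taken from the log above.
		if err := t.Execute(os.Stdout, initOpts{"192.168.39.174", 8443, "old-k8s-version-366657"}); err != nil {
			panic(err)
		}
	}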
	
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
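The three rounds above copy each certificate into /usr/share/ca-certificates and link it under /etc/ssl/certs by OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of deriving that link name, assuming openssl is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// linkName derives the /etc/ssl/certs/<subject-hash>.0 symlink name
	// used in the log above by asking openssl for the subject hash.
	func linkName(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		name, err := linkName("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		fmt.Println(name) // per the log above: /etc/ssl/certs/b5213941.0
	}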
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
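The `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours; openssl exits non-zero if the certificate expires within that window. A sketch of one such check, reusing a path from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// openssl x509 -checkend 86400 exits 0 if the certificate is still
		// valid for at least the next 86400 seconds, non-zero otherwise.
		err := exec.Command("openssl", "x509", "-noout",
			"-in", "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"-checkend", "86400").Run()
		if err != nil {
			fmt.Println("certificate expires within 24h (or check failed):", err)
			return
		}
		fmt.Println("certificate valid for at least another 24h")
	}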
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
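The grep-then-rm sequence above discards any kubeconfig that does not already point at https://control-plane.minikube.internal:8443, so the subsequent `kubeadm init phase kubeconfig` run regenerates them. A condensed Go sketch of that cleanup (the loop structure here is an illustration, not minikube's exact code):

	package main

	import "os/exec"

	func main() {
		// Remove any kubeconfig not already pointing at the expected
		// control-plane endpoint; kubeadm regenerates them afterwards.
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + f
			// grep exits non-zero when the endpoint is absent or the file is missing.
			if exec.Command("sudo", "grep", "https://control-plane.minikube.internal:8443", path).Run() != nil {
				exec.Command("sudo", "rm", "-f", path).Run()
			}
		}
	}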
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
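The repeated pgrep runs above are a roughly 500 ms poll for the kube-apiserver process (api_server.go:52, "waiting for apiserver process to appear"). A self-contained sketch of such a wait loop; the timeout and helper name below are assumptions for illustration, not minikube's exact implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep every 500ms until a kube-apiserver
	// process appears or the deadline passes.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil // pgrep exit 0: the process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}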
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
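
The guest-clock probe above runs `date +%s.%N` inside the VM and compares the result against the host wall clock, accepting the machine only if the drift stays under a small tolerance (here 90.477635ms). A minimal sketch of that comparison in Go; `parseGuestClock` and the 2-second tolerance are illustrative assumptions, not minikube's actual fix.go code:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts "1721609447.006036489" (the output of
    // `date +%s.%N`) into a time.Time. Hypothetical helper for this sketch.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1721609447.006036489")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
    	if math.Abs(delta.Seconds()) < tolerance.Seconds() {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
    	}
    }

Parsing the seconds and nanoseconds fields separately avoids the float rounding a naive `strconv.ParseFloat` would introduce on the nanosecond part.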
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
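
Taken together, the `sed` edits above should leave /etc/crio/crio.conf.d/02-crio.conf with settings equivalent to the following (an illustrative reconstruction from the commands, not a capture of the actual file):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The unprivileged-port sysctl lets pods bind ports below 1024 without extra capabilities, which the ingress and kube-proxy tests rely on.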
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
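
The bash one-liner above is the idempotent /etc/hosts update: strip any stale `host.minikube.internal` entry, append the fresh mapping, stage the result in a temp file, and copy it back with sudo (a plain `>` redirect would run as the unprivileged SSH user and fail). The same command, unfolded with comments:

    { grep -v $'\thost.minikube.internal$' /etc/hosts;   # drop any existing entry
      echo "192.168.72.1	host.minikube.internal";       # append the fresh mapping
    } > /tmp/h.$$                                         # stage under the shell's PID
    sudo cp /tmp/h.$$ /etc/hosts                          # install as root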
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
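
The retry.go lines above wait for libvirt to hand the new domain a DHCP lease, sleeping a growing, jittered delay between polls (231ms, 274ms, 470ms, ...). A self-contained sketch of that wait-with-backoff pattern; `waitForIP` and the fake lookup are hypothetical stand-ins, not minikube's actual retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it returns an address, sleeping a
    // jittered, roughly doubling delay between attempts, mirroring the
    // retry.go cadence in the log above.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
    	delay := 200 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return "", errors.New("machine never reported an IP")
    }

    func main() {
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 4 { // simulate a lease arriving on the 4th poll
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.39.10", nil // hypothetical address
    	}, 10)
    	fmt.Println(ip, err)
    }

Jittering the delay keeps parallel test VMs from hammering libvirt in lockstep.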
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
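
The `openssl x509 -hash -noout` calls above compute OpenSSL's subject-name hash, which doubles as the filename the TLS stack expects under /etc/ssl/certs: each CA must be reachable as <hash>.0. Reproducing the minikube CA link by hand (the hash value matches the symlink command in the log):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0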
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
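
Each `-checkend 86400` call above asks openssl to exit non-zero if the certificate expires within the next 86400 seconds (24 hours); since every check passes, minikube treats the existing control-plane certs as still usable on restart. The same check by hand:

    $ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
        && echo "valid for at least another day"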
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
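The 500 responses above are the apiserver's /healthz endpoint enumerating its poststarthook checks; minikube simply re-polls the URL about every 500ms until it returns 200 ("ok"). A minimal Go sketch of that wait loop (the endpoint and cadence mirror the log; the InsecureSkipVerify client is an assumption for brevity, as the real client trusts the cluster CA):

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls https://<apiserver>/healthz until HTTP 200.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{
    			// assumption: the real client verifies against the cluster CA
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    		Timeout: 5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned "ok"
    			}
    			// A 500 body lists each failing poststarthook, as seen above.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s never became healthy", url)
    }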
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
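The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain that the "Configuring bridge CNI" step refers to. A hypothetical conflist of the same general shape, embedded as a Go string (field values are illustrative only; the exact bytes minikube writes may differ):

    // bridgeConflist sketches a bridge+portmap CNI chain; contents are
    // illustrative, not the exact file from the log.
    const bridgeConflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`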
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
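Before re-running the addon phase, minikube lists the kube-system pods and reads node capacity straight off the Node objects (the cpu and ephemeral-storage figures above). A client-go sketch of those two reads (clientset construction omitted; the function name is illustrative):

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func checkPodsAndCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
    			n.Status.Capacity.Cpu().String(),
    			n.Status.Capacity.StorageEphemeral().String())
    	}
    	return nil
    }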
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
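The pod_ready loop above deliberately short-circuits: while the hosting node reports Ready "False", no pod on it can become Ready, so each pod wait is cut off with the "(skipping!)" error instead of burning the full 4m0s per pod. A rough client-go sketch of that check (names and error text are illustrative):

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether a pod is Ready, but bails out early when
    // its node is not Ready -- the "skipping!" case in the log.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
    			return false, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, name)
    		}
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }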
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
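Each addon manifest is staged under /etc/kubernetes/addons over scp and then applied in one shot with the cluster's pinned kubectl, pointing KUBECONFIG at the control plane's kubeconfig. A local-equivalent sketch with os/exec (binary and kubeconfig paths are copied from the log; running locally instead of through ssh_runner is the simplification here):

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddons mirrors the kubectl apply command in the log.
    func applyAddons(files ...string) error {
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
    	}
    	return nil
    }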
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
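Process 71766, interleaved above, is waiting for a kube-apiserver process to exist at all, polling `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms; pgrep exits 0 only when a process whose full command line matches the pattern is found. A minimal sketch of that wait (the attempt count is an assumption; the command and cadence come from the log):

    import (
    	"errors"
    	"os/exec"
    	"time"
    )

    func waitForAPIServerProcess() error {
    	for i := 0; i < 240; i++ { // ~2 minutes at the log's ~500ms cadence
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // pgrep exits 0 when a match exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("kube-apiserver process never appeared")
    }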
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
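WaitForSSH shells out to the external ssh binary with the option set shown above and the no-op command `exit 0`; a nil error with empty output, as in the line just above, means the guest's sshd is up and accepting the machine key. A condensed sketch (flags taken from the log's ssh invocation; the retry policy is an assumption):

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady returns nil once `ssh ... exit 0` succeeds against the guest.
    func sshReady(ip, keyPath string) error {
    	args := []string{
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@" + ip,
    		"exit 0",
    	}
    	for i := 0; i < 30; i++ { // retry cadence is illustrative
    		if exec.Command("ssh", args...).Run() == nil {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("sshd on %s never came up", ip)
    }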
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
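The server certificate is reissued against the machine CA with the SAN list printed above (loopback, the machine IP, the hostname, localhost, minikube). A compact crypto/x509 sketch of issuing such a cert (key size, validity window, and subject are illustrative, not minikube's exact values):

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert signs a server cert with caCert/caKey, mirroring the
    // san=[...] list in the log.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
    	ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"minikube-machines"}}, // illustrative
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.61.97
    		DNSNames:     dnsNames, // e.g. default-k8s-diff-port-214905, localhost, minikube
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }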
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
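The stray `%!s(MISSING)` in the crio.minikube command above is not shell syntax; it is how Go's fmt package reports a `%s` verb that received no matching argument when the command string was built, and the error marker was sent to the guest verbatim. A one-line reproduction:

    // fmt prints "%!s(MISSING)" when a verb has no argument:
    s := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s")
    // s == "sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)"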
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
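[editor's note] The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew if it is under a tolerance. A minimal sketch of that comparison, assuming a hypothetical 2s threshold (the actual tolerance is not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output and returns guest - host.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.Split(strings.TrimSpace(guestOut), ".")
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing seconds: %w", err)
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	d, err := guestClockDelta("1721609467.506036600", time.Unix(1721609467, 424041395))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold, not from the log
	fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < tolerance)
}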
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
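[editor's note] The three lines above show a probe-then-load pattern: the `sysctl` check fails with status 255 because the br_netfilter module is not loaded, so minikube falls back to `modprobe` and then enables IPv4 forwarding. A sketch of the same sequence; command names match the log, error handling is illustrative:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Probe: sysctl fails if /proc/sys/net/bridge/... does not exist yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl probe failed (%v); loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Enable IPv4 forwarding, as the subsequent log line does.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}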
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
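[editor's note] The /etc/hosts rewrite above is idempotent: it filters out any stale host.minikube.internal line, then appends the current mapping through a temp file. A sketch of the same logic in Go, using the gateway IP from the log (run as root; the in-place write stands in for the temp-file-plus-cp dance):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.61.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping so repeated runs don't accumulate entries.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
		log.Fatal(err)
	}
}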
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
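[editor's note] The pod_ready.go lines above poll each system pod until its Ready condition is True, within a 6m budget. A minimal client-go sketch of that loop, assuming a placeholder kubeconfig path and the pod name from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-360389", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}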
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
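[editor's note] The preload path above copies a ~406MB lz4 tarball to the guest and unpacks it with tar's -I (external decompressor) flag, preserving xattrs so container images keep their capabilities. A sketch reproducing that invocation; paths and flags are the ones logged:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("extracted preload in %s", time.Since(start))
}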
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
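[editor's note] One consistency property worth noting in the generated config above: the KubeletConfiguration's cgroupDriver must match the cgroup_manager written into /etc/crio/crio.conf.d/02-crio.conf earlier in the log, or pods fail to start. A sketch of that sanity check, assuming the sigs.k8s.io/yaml package (which maps YAML through JSON struct tags); the embedded document is an excerpt of the config above:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

type kubeletConfig struct {
	CgroupDriver             string `json:"cgroupDriver"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
}

func main() {
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(kubeletDoc), &kc); err != nil {
		panic(err)
	}
	if kc.CgroupDriver != "cgroupfs" {
		panic(fmt.Sprintf("unexpected cgroup driver %q", kc.CgroupDriver))
	}
	fmt.Println("kubelet config consistent with CRI-O:", kc.ContainerRuntimeEndpoint)
}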
	
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
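[editor's note] Each `openssl x509 ... -checkend 86400` run above verifies a certificate stays valid for at least another 24 hours before the restart reuses it. The same check in Go with crypto/x509, using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: fail if expiry is within 24h.
	if remaining := time.Until(cert.NotAfter); remaining < 24*time.Hour {
		log.Fatalf("certificate expires in %s; regeneration needed", remaining)
	}
	fmt.Println("certificate valid for at least 24h")
}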
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
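[editor's note] The ~500ms cadence of the `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above is a process-existence poll: the healthz probe only starts once pgrep exits 0. A sketch of that wait loop; the pattern matches the logged command, the timeout is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // assumed budget, not from the log
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}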
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
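[editor's note] The healthz progression above is the normal restart sequence: anonymous requests get 403 until RBAC is bootstrapped, then 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A sketch of such a probe loop; TLS verification is skipped because the probe is anonymous, and the retry budget is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.97:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			// 403 and 500 are expected transient states during startup.
			fmt.Printf("healthz returned %d; retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}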
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
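	The block above is minikube's apiserver health gate: it polls /healthz roughly every 500ms, logs the failed poststarthook lines verbatim on every 500, and moves on to CNI configuration once it sees a 200. A minimal sketch of the same polling pattern, assuming a self-signed cluster certificate (hence InsecureSkipVerify) and reusing the endpoint URL from the log — not minikube's actual implementation:

// healthz_poll.go - sketch of polling a kube-apiserver /healthz endpoint
// until it returns 200, in the spirit of the api_server.go loop above.
// Interval and deadline are assumptions, not minikube's exact values.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.61.97:8444/healthz" // endpoint taken from the log above
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert in this setup; a real
		// client should pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body)) // prints "ok"
				return
			}
			// A 500 here carries the per-poststarthook lines seen in the log.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}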
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
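	The interleaved 71766 stream is a different cluster (the old-k8s-version group) running the same process check every 500ms: `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH, which here never finds a match. A sketch of that wait loop, run locally rather than through ssh_runner, with the pattern copied from the log:

// pgrep_wait.go - sketch of the repeated pgrep check above: poll until a
// kube-apiserver process matching the pattern exists. minikube runs this
// over SSH via ssh_runner; that transport is omitted here.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pattern := "kube-apiserver.*minikube.*" // pattern copied from the log
	for i := 0; i < 20; i++ {
		// -f matches the full command line, -x requires an exact match,
		// -n returns only the newest matching pid. pgrep exits non-zero
		// when nothing matches, which surfaces as err != nil.
		if out, err := exec.Command("pgrep", "-xnf", pattern).Output(); err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("no kube-apiserver process appeared")
}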
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
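	The scp line above drops a 496-byte bridge CNI config into /etc/cni/net.d. The actual file contents are not shown in the log; the sketch below writes a generic bridge/portmap conflist of the usual shape, which is an assumption, not minikube's real 1-k8s.conflist:

// write_conflist.go - sketch of materialising a bridge CNI config like the
// 1-k8s.conflist copied above. The JSON body is a generic bridge example
// (assumed); the real file minikube writes is not reproduced in this log.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	path := "/etc/cni/net.d/1-k8s.conflist"
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed (needs root):", err)
		return
	}
	fmt.Printf("wrote %d bytes to %s\n", len(conflist), path)
}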
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
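	After listing the kube-system pods, the harness verifies NodePressure: it reads node capacity (the 17734596Ki ephemeral storage and 2 CPUs above) and checks the pressure conditions. A client-go sketch of the same verification; the kubeconfig path is a placeholder borrowed from the log, not a guaranteed location:

// node_pressure.go - sketch of the NodePressure verification above: list
// nodes, print cpu and ephemeral-storage capacity, and flag any
// Memory/Disk/PID pressure condition that is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			pressure := c.Type == corev1.NodeMemoryPressure ||
				c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure
			if pressure && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}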
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
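	The pod_ready.go lines above all follow one pattern: poll a pod until its Ready condition reports True, within a 4m0s budget. For the control-plane pods this succeeds in seconds; for the metrics-server pods it keeps returning "Ready":"False" for the rest of the run. A client-go sketch of that wait, with the namespace, pod name, and kubeconfig path taken from the log as illustrative placeholders:

// pod_ready.go (sketch) - poll a pod until its Ready condition is True or
// the 4m0s budget from the log expires. Not the test harness's real code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-569cc877fc-dm7k7", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}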
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
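	The cri.go sweep above queries crictl once per expected component; every query returning an empty id list means CRI-O has no control-plane containers at all after the restart, which is why the harness falls back to gathering host logs next. A sketch of the same sweep, assuming crictl is on PATH and sudo is available:

// cri_scan.go - sketch of the cri.go sweep above: ask crictl for each
// expected component and report which have no container. An empty result
// for every name, as in this log, means nothing was ever started.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}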
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
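	With no containers to inspect, the "Gathering logs for ..." pass shells out to host-level diagnostics: kubelet and CRI-O journals, filtered dmesg, kubectl describe nodes (which fails here with connection refused on localhost:8443, since the apiserver is down), and a container status listing. A simplified sketch of that pass, running the same commands seen in the log and just printing combined output instead of the harness's capture-and-interleave handling:

// gather_logs.go - sketch of the diagnostics pass above. Commands are
// copied from the log; the describe-nodes step is omitted because it only
// fails while the apiserver is unreachable, as shown in the stderr above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"crictl", "ps", "-a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("--- %v (err=%v) ---\n%s\n", c, err, out)
	}
}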
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
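(Every diagnostic cycle below from PID 71766, apparently the old-k8s-version profile given the pinned v1.20.0 binaries, ends the same way: no kube-apiserver process or container is found, so each kubectl call against localhost:8443 is refused. A minimal sketch of reproducing that diagnosis by hand, using the same commands the log records; assumes shell access to the node via minikube ssh, and the profile name shown is hypothetical.)

	minikube ssh -p old-k8s-version-XXXXXX
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # no output: the apiserver process is not running
	sudo crictl ps -a --quiet --name=kube-apiserver  # empty: no apiserver container was ever created
	sudo journalctl -u kubelet -n 400                # kubelet logs typically show why the static pod failed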
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
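Every describe-nodes attempt in this run fails identically: the bundled v1.20.0 kubectl cannot reach the API server because nothing answers on localhost:8443, which is consistent with the empty kube-apiserver container listing above. A quick way to confirm that by hand from inside the node, assuming curl and ss are available in the guest:

    # a refused connection and an empty listener list confirm the apiserver never came up
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
    sudo ss -lntp | grep -w 8443 || echo "nothing listening on :8443"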
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
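The block above is one complete diagnostic pass: pgrep -xnf first checks for a live kube-apiserver process (exact match against the full command line, newest PID), then crictl enumerates each control-plane component by name, and finally the describe-nodes, CRI-O, container-status, kubelet, and dmesg logs are gathered. The per-component probe reduces to a small loop; a sketch using the same commands the log shows, assuming crictl is on the PATH:

    # mirror minikube's probe: list all containers (running or exited) matching each name
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # an empty ID list is what produces the 'found id: ""' / '0 containers' lines above
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done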
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
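The same gather repeats on every retry, each time sampling only the newest 400 lines per unit (journalctl -u <unit> -n 400). When reproducing interactively it is usually easier to follow the relevant units live; a sketch, assuming systemd journal access on the node:

    # tail kubelet and crio together instead of re-sampling 400-line windows
    sudo journalctl -u kubelet -u crio -n 50 -f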
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
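Note the fallback built into the container-status gather above: if crictl is not installed, the command substitution degrades to the bare name (whose invocation then fails) and the || branch tries the docker CLI instead. The logged one-liner can be reused as-is when scripting against hosts with mixed runtimes:

    # prefer crictl when present, otherwise fall back to docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a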
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
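The block above is one pass of minikube's control-plane health check (cri.go / logs.go, pid 71766): it pgreps for a kube-apiserver process, asks CRI-O for each expected control-plane container by name, finds none (every probe returns `found id: ""`), then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying roughly every three seconds. The describe-nodes step fails because nothing is serving on localhost:8443. The same probes can be replayed by hand; a minimal sketch, assuming `minikube ssh` works for the profile under test (add `-p <profile>` as needed) and reusing the command strings verbatim from the ssh_runner lines above:

	#!/usr/bin/env bash
	# Replay the container probes from the health-check loop above.
	# An empty line of output corresponds to a `found id: ""` entry in the log.
	minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  minikube ssh -- sudo crictl ps -a --quiet --name="$name"
	done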
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
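Every retry of the loop ends by collecting the same five log sources seen above. For reference, the equivalent manual collection, each command copied verbatim from the log (the pinned v1.20.0 kubectl binary marks this as a profile running a legacy Kubernetes version):

	#!/usr/bin/env bash
	# Collect the five log sources the loop gathers on each retry.
	minikube ssh -- 'sudo journalctl -u kubelet -n 400'
	minikube ssh -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'
	# The describe-nodes step keeps failing while nothing listens on :8443.
	minikube ssh -- 'sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig'
	minikube ssh -- 'sudo journalctl -u crio -n 400'
	minikube ssh -- 'sudo `which crictl || echo crictl` ps -a || sudo docker ps -a'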
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
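The block above is one pass of a cycle that repeats below roughly every three seconds: with no apiserver reachable, minikube probes each control-plane component by name and finds no containers. A minimal sketch of the same probe run inside the VM (the loop form is illustrative; the component names and crictl flags are taken from the log):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  # empty output here corresponds to the 'found id: ""' lines in the log
	  sudo crictl ps -a --quiet --name="$name"
	done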
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
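The interleaved pod_ready lines come from three other test processes (PIDs 71227, 71396, 72069) polling metrics-server pods that never report Ready. That poll is roughly equivalent to a kubectl wait (sketch only; the label selector below is an assumption, not taken from the log):

	# hypothetical equivalent of the pod_ready poll; the -l selector is assumed
	kubectl -n kube-system wait --for=condition=Ready \
	  pod -l k8s-app=metrics-server --timeout=60s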
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
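Every "describe nodes" attempt above and below fails the same way: the kubeconfig points at localhost:8443, and nothing is listening there because no kube-apiserver container exists. Two quick checks from inside the VM (a sketch: the ss probe is an assumption, while the kubectl invocation is copied verbatim from the log):

	# is anything listening on the apiserver port?
	sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
	# the same call the log collector makes, using the node-local kubectl binary
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig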
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
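[editor's note] The interleaved pod_ready lines come from three other test processes (PIDs 71396, 72069, 71227) polling their metrics-server pods, which never report Ready. A check of this kind comes down to inspecting the pod's PodReady condition; the following is a hedged sketch using the upstream k8s.io/api types, not minikube's pod_ready.go:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // A pod stuck at Ready:"False", as the log keeps reporting.
        p := &corev1.Pod{Status: corev1.PodStatus{
            Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
        }}
        fmt.Println(podIsReady(p)) // false
    }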
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
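[editor's note] After the container probes, each cycle collects the same five evidence sources: the kubelet and crio units from journald, dmesg filtered to warn level and above, `kubectl describe nodes`, and a container listing that falls back to docker when crictl is absent. A hypothetical stand-in for the ssh_runner.go calls above (the real harness runs these over SSH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one evidence-collection script, mirroring a
    // "Gathering logs for ..." line in the log above.
    func gather(name, script string) {
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    }

    func main() {
        gather("kubelet", `sudo journalctl -u kubelet -n 400`)
        gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
        gather("CRI-O", `sudo journalctl -u crio -n 400`)
        // `which crictl || echo crictl` keeps the first command syntactically
        // valid even when crictl is missing, so the || docker fallback fires.
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }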
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
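[editor's note] `describe nodes` keeps failing the same way: the versioned kubectl exits 1 because nothing is listening on localhost:8443, which is consistent with the empty kube-apiserver listings in the same cycles. A sketch, under the assumption that the binary and kubeconfig paths exist as logged, of how such a failure surfaces both the exit status and the stderr text that logs.go:130 records:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // With no apiserver on localhost:8443, kubectl exits 1 and the
            // "connection ... refused" line arrives on stderr, as above.
            fmt.Printf("exit status %d, stderr: %s\n", ee.ExitCode(), ee.Stderr)
            return
        }
        fmt.Println(string(out))
    }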
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
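[editor's note] Each retry cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match. pgrep exits non-zero when nothing matches, so the harness falls straight through to the crictl probes. Illustration only, assuming pgrep and sudo are available:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverPID returns the newest PID whose full command line matches
    // the pattern; a non-nil error here means no such process.
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        return string(out), err
    }

    func main() {
        pid, err := apiserverPID()
        if err != nil {
            fmt.Println("kube-apiserver not running") // the state in this log
            return
        }
        fmt.Println("pid:", pid)
    }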
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
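
The block above is the stale-kubeconfig sweep that follows "kubeadm reset": each expected file under /etc/kubernetes is grepped for the control-plane endpoint and removed when the check fails; here every file is already gone, so each grep exits with status 2 and each rm is a no-op. Condensed into a sketch built from the exact commands logged:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep failing (file missing or endpoint absent) triggers the removal.
	  sudo grep "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
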
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
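
While this init waits up to 4m0s for the kubelet to start the static pods, the manifests it just wrote can be inspected on the node, and the kubelet's progress comes from the same journalctl command used for log gathering throughout this run:

	ls /etc/kubernetes/manifests
	sudo journalctl -u kubelet -n 400
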
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
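
Process 71396 hits the same dead end: metrics-server never turns Ready inside the 4m0s budget, the extra wait for system-critical pods is abandoned, and the control plane is reset instead of restarted. An equivalent manual probe of the condition being polled (the k8s-app=metrics-server label is an assumption; the log identifies the pod only by name):

	kubectl -n kube-system wait pod -l k8s-app=metrics-server \
	  --for=condition=Ready --timeout=4m0s
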
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
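
Unlike the 71766 loop earlier, process 72069 has a running control plane, so log gathering can pull per-container logs by ID. One of those pulls, replayed by hand from the two commands in the log (etcd is used here because it returned an ID above):

	id=$(sudo crictl ps -a --quiet --name=etcd | head -n1)
	sudo /usr/bin/crictl logs --tail 400 "$id"
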
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
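
For reference, the --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch of that derivation, assuming the standard kubeadm PKI path /etc/kubernetes/pki/ca.crt on the node:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Read and parse the cluster CA certificate (standard kubeadm path).
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
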
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
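
The cri.go lines above repeat one pattern: run crictl ps -a --quiet with a --name filter and collect the container IDs it prints, one per line. A simplified local sketch of that enumeration (minikube runs the same command over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the crictl invocation in the log: list all
    // containers whose name matches the filter and return their IDs.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(component)
            if err != nil {
                fmt.Println(component, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
        }
    }
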
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
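
The pod_ready.go:102 lines above come from polling the pod's Ready condition until it reports "True". An equivalent one-shot check using kubectl jsonpath, as a self-contained stand-in (minikube itself queries the API through client-go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Pod name taken from the log; prints "True" once the pod is Ready.
        out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
            "metrics-server-569cc877fc-dm7k7",
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", strings.TrimSpace(string(out)))
    }
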
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
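
The bridge CNI step above scp'd a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The actual bytes are not shown in the log; the sketch below writes an illustrative minimal bridge configuration, where the subnet and plugin fields are placeholder assumptions rather than the real contents of this run:

    package main

    import "os"

    // Illustrative only: a minimal bridge CNI configuration of the general
    // shape minikube installs. Field values are assumptions.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    `

    func main() {
        // Requires root on the node; minikube performs this over SSH.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
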
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
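
Each "Gathering logs for ..." pair above is one remote command: crictl logs --tail 400 for a container ID found by the earlier listing, or journalctl -n 400 for a systemd unit. A condensed local sketch of that loop, using a container ID from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection command through bash, as ssh_runner does.
    func gather(desc, shellCmd string) {
        fmt.Printf("==> %s\n", desc)
        out, _ := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
        fmt.Print(string(out))
    }

    func main() {
        // Container ID as found by the crictl listing above.
        gather("kube-apiserver", "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e")
        gather("CRI-O", "sudo journalctl -u crio -n 400")
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
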
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
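
The healthz wait preceding the "Done!" above is a plain HTTPS GET against the apiserver's /healthz endpoint, treating a 200 response with body "ok" as healthy. A minimal sketch; InsecureSkipVerify here stands in for loading the cluster CA, which is what a real check should do:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Endpoint from the log. Skipping TLS verification is for brevity only.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.72.32:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
    }
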
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
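
The metrics-server enablement above boils down to copying four manifests into /etc/kubernetes/addons/ and applying them in a single kubectl invocation with the node-local kubeconfig. A simplified sketch of that final apply step:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Mirrors the apply command in the log; sudo accepts the leading
        // KUBECONFIG=... argument as an environment setting.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
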
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
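
The failing [kubelet-check] above probes the kubelet's local healthz endpoint; "connection refused" means nothing is listening on port 10248 yet. The probe is equivalent to this minimal sketch:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Same endpoint kubeadm polls in the log lines above.
        resp, err := http.Get("http://localhost:10248/healthz")
        if err != nil {
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz:", resp.Status)
    }
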
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
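The lines above show process 71396 completing minikube's readiness gates (apiserver health, system pods, default service account, kubelet service, node conditions) for the "no-preload-945581" profile. As a hedged sanity check, the same end state can be inspected by hand with kubectl; the context name is taken from the log line above, and the commands are illustrative rather than part of the test:

    kubectl config use-context no-preload-945581
    # list the kube-system pods the waiter enumerated above
    kubectl -n kube-system get pods -o wide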
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
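At this point process 71227 has given up on metrics-server-569cc877fc-dm7k7 ever reporting Ready and falls back to a full `kubeadm reset`. A hedged way to see why such a pod stays unready (the k8s-app=metrics-server label is an assumption inferred from the pod name; adjust it to the actual labels on the deployment):

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    # events and the failing readiness probe show up in describe output
    kubectl -n kube-system describe pod metrics-server-569cc877fc-dm7k7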
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
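Process 71766 is stuck on the kubelet health check; the probe kubeadm runs is quoted verbatim in the log lines above and can be reproduced on the node. These are standard commands; nothing here is specific to this run except the port, which comes from the log:

    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 50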
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
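The four grep/rm pairs above implement minikube's stale-config cleanup: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so `kubeadm init` regenerates it. A minimal bash sketch of the same pattern, with the endpoint and paths copied from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done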
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
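The warning above is actionable: kubeadm started the kubelet but did not enable it, so it will not come back after a reboot. The fix it suggests, plus the documented way to recompute the --discovery-token-ca-cert-hash shown in the join commands if the original output is lost (standard commands from the Kubernetes documentation, not taken from this run):

    sudo systemctl enable kubelet.service
    # recompute the CA cert hash used by 'kubeadm join'
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'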
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
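The 496-byte file copied above is minikube's bridge CNI configuration. Its exact contents are not in the log; a representative bridge conflist of the kind minikube writes looks roughly like the sketch below, where all values, in particular the 10.244.0.0/16 pod subnet, are assumptions for illustration:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF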
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
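The burst of `kubectl get sa default` calls above is minikube polling, roughly every 500ms, until the default service account exists after the minikube-rbac cluster role binding is created; that loop is what the elevateKubeSystemPrivileges duration metric times. A hedged shell equivalent using the same binary and kubeconfig paths from the log:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done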
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
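The per-pod waits above (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) can be expressed directly with kubectl; the component= label is the conventional one on control-plane static pods and is an assumption here, since the log only shows minikube's internal waiter:

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l component=kube-scheduler --timeout=6m
    kubectl wait --for=condition=Ready node/default-k8s-diff-port-214905 --timeout=6m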
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
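With the addon manifests applied and the addons reported enabled, metrics-server registration can be verified out of band. The APIService name v1beta1.metrics.k8s.io is the conventional one for metrics-server and is an assumption here, since the log does not show it:

    kubectl get apiservice v1beta1.metrics.k8s.io
    # only succeeds once metrics-server is actually serving metrics
    kubectl top nodes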
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
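The healthz probe above can be reproduced from outside the VM. The endpoint comes from the log; -k is needed because the apiserver certificate is signed by minikube's own CA rather than a trusted one:

    curl -k https://192.168.61.97:8444/healthz
    # expected body: ok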
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
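The retry loop above (238ms, 320ms, then 458ms backoffs) waits for kube-dns and kube-proxy to leave Pending. A hedged way to watch the same transition interactively, where the k8s-app labels are the standard ones on the CoreDNS deployment and kube-proxy DaemonSet:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m
    kubectl -n kube-system get pods -l k8s-app=kube-proxy -w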
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
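The wait loop above (system_pods.go) polls the kube-system pods and retries with backoff until every expected component, kube-proxy included, reports Running. A minimal bash sketch of the same readiness check, under the assumption that kube-proxy carries the standard k8s-app=kube-proxy label (the context name is taken from the log; the fixed 1s delay stands in for minikube's variable backoff):

    # Poll kube-system until kube-proxy reports Running, as the retry loop above does.
    until kubectl --context default-k8s-diff-port-214905 -n kube-system \
        get pods -l k8s-app=kube-proxy -o jsonpath='{.items[*].status.phase}' \
        | grep -q Running; do
      echo "missing components: kube-proxy; retrying"
      sleep 1
    done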
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
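The kubelet-check messages above come from kubeadm probing the kubelet's local healthz endpoint on port 10248. The probe can be reproduced by hand on the node; this sketch wraps the same curl call the log quotes and, on failure, falls back to the troubleshooting commands kubeadm itself suggests:

    # Reproduce kubeadm's kubelet health probe (same URL as quoted in the log).
    if curl -sSf http://localhost:10248/healthz >/dev/null; then
      echo "kubelet healthz: ok"
    else
      echo "kubelet healthz: connection refused; inspect the service:"
      systemctl status kubelet
      journalctl -xeu kubelet | tail -n 50
    fi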
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
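The four grep/rm pairs above implement minikube's stale-config cleanup: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise. The same pass written as a loop (endpoint and file names taken verbatim from the log):

    # Remove kubeconfigs that do not point at the expected control plane.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done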
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
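The block above scans for each control-plane component by name and finds no containers at all, confirming that the kubelet never launched the static pods. A sketch of the same scan, looping over the component names the log checks:

    # List CRI containers (any state) matching each expected component name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done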
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
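For reference, the diagnostics gathered above can be collected in one pass on the node; these are the same commands the log runs, lightly simplified:

    sudo crictl ps -a || sudo docker ps -a               # container status
    sudo journalctl -u kubelet -n 400                    # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig         # fails here: apiserver down
    sudo journalctl -u crio -n 400                       # CRI-O logs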
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 
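The suggested remediation above can be applied on the next start. As a single command (the profile name is an assumption, taken from the node name that appears in the log sections below):

    # Force the kubelet cgroup driver to systemd, per the suggestion in the log.
    minikube start -p old-k8s-version-366657 \
        --extra-config=kubelet.cgroup-driver=systemd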
	
	
	==> CRI-O <==
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.854078448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721609919854053021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=191d8dac-b980-43d1-b29e-8d3ddf4f1e81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.854509528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9471545f-babd-48ef-bb5c-aa4db872f29f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.854571205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9471545f-babd-48ef-bb5c-aa4db872f29f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.854610374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9471545f-babd-48ef-bb5c-aa4db872f29f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.889074344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6f1dd30-8afd-4523-a4e6-0c8e8875fb96 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.889185266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6f1dd30-8afd-4523-a4e6-0c8e8875fb96 name=/runtime.v1.RuntimeService/Version
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.890181067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=689c10e0-5b18-4251-bf2e-4319456db239 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.890610673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721609919890571214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=689c10e0-5b18-4251-bf2e-4319456db239 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.891279862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39c22ce0-499a-4783-a797-db0dbcc22926 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.891334519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39c22ce0-499a-4783-a797-db0dbcc22926 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.891366728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=39c22ce0-499a-4783-a797-db0dbcc22926 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.922265023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4824b695-2c8b-472b-8764-f17d2433e22a name=/runtime.v1.RuntimeService/Version
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.922348521Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4824b695-2c8b-472b-8764-f17d2433e22a name=/runtime.v1.RuntimeService/Version
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.923547012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d0c1014-53ab-4ccf-8386-e1abde7809a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.923952509Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721609919923930203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d0c1014-53ab-4ccf-8386-e1abde7809a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.924549575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50dfab81-3c99-4890-b02b-15e5d9976cdb name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.924607329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50dfab81-3c99-4890-b02b-15e5d9976cdb name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.924643779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=50dfab81-3c99-4890-b02b-15e5d9976cdb name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.954171279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2fe87ac-bcf8-4432-8a7d-b474160e580f name=/runtime.v1.RuntimeService/Version
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.954280919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2fe87ac-bcf8-4432-8a7d-b474160e580f name=/runtime.v1.RuntimeService/Version
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.955087419Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f6058e4-c57b-4448-83f8-c81219ea6113 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.955515385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721609919955492516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f6058e4-c57b-4448-83f8-c81219ea6113 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.955966450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9e8a02f-697d-41ac-9948-b25b7691f394 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.956023894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9e8a02f-697d-41ac-9948-b25b7691f394 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 00:58:39 old-k8s-version-366657 crio[629]: time="2024-07-22 00:58:39.956058335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e9e8a02f-697d-41ac-9948-b25b7691f394 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul22 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051104] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039554] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.496567] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.796830] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544248] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.276300] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.064156] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073267] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.169185] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.171264] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.282291] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +6.446308] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.069249] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.917900] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[ +11.851684] kauditd_printk_skb: 46 callbacks suppressed
	[Jul22 00:54] systemd-fstab-generator[5055]: Ignoring "noauto" option for root device
	[Jul22 00:56] systemd-fstab-generator[5340]: Ignoring "noauto" option for root device
	[  +0.066214] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:58:40 up 8 min,  0 users,  load average: 0.00, 0.05, 0.03
	Linux old-k8s-version-366657 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000650738, 0x9, 0x9, 0x4f04880, 0xc00033a000, 0x0, 0xc000000000, 0xc00042d1a0, 0xc000700560)
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000650700, 0xc000919dd0, 0x1, 0x0, 0x0)
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000872a80)
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: goroutine 155 [select]:
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0001088c0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00033a2a0, 0x0, 0x0)
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000872a80)
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 22 00:58:37 old-k8s-version-366657 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 22 00:58:38 old-k8s-version-366657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 22 00:58:38 old-k8s-version-366657 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 22 00:58:38 old-k8s-version-366657 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 22 00:58:38 old-k8s-version-366657 kubelet[5586]: I0722 00:58:38.529670    5586 server.go:416] Version: v1.20.0
	Jul 22 00:58:38 old-k8s-version-366657 kubelet[5586]: I0722 00:58:38.529950    5586 server.go:837] Client rotation is on, will bootstrap in background
	Jul 22 00:58:38 old-k8s-version-366657 kubelet[5586]: I0722 00:58:38.531745    5586 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 22 00:58:38 old-k8s-version-366657 kubelet[5586]: I0722 00:58:38.532995    5586 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 22 00:58:38 old-k8s-version-366657 kubelet[5586]: W0722 00:58:38.533171    5586 manager.go:159] Cannot detect current cgroup on cgroup v2
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 2 (226.590064ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-366657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (738.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389: exit status 3 (3.167622455s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0722 00:47:02.142962   71943 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host
	E0722 00:47:02.142982   71943 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-360389 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-360389 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152364055s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-360389 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389: exit status 3 (3.063548769s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0722 00:47:11.359022   72023 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host
	E0722 00:47:11.359041   72023 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-360389" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.17s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-360389 -n embed-certs-360389
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-22 01:04:25.672883943 +0000 UTC m=+5984.381482649
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-360389 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-360389 logs -n 25: (2.065871349s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-590595             | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-590595                  | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:19.710843   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:25.790913   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:28.862882   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:34.942917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:38.014863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:44.094898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:47.166853   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:53.246799   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:56.318939   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:02.398890   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:05.470909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:11.550863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:14.622851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:20.702859   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:23.774851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:29.854925   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:32.926912   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:39.006904   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:42.078947   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:48.158822   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:51.230942   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:57.310909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:00.382907   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:06.462849   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:09.534836   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:15.614953   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:18.686869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:24.766917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:27.838869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:33.918902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:36.990920   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:43.070898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
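The WaitForSSH probe above runs `exit 0` through an external ssh client built from the logged argument vector. Reassembled into conventional option-before-destination form (key path and IP copied verbatim from the log, so the command is only meaningful on this CI host), the equivalent invocation is:

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes -p 22 \
	    -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa \
	    docker@192.168.50.251 'exit 0'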
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
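The SSH fragment above is how the provisioner pins the machine hostname in the guest's /etc/hosts: rewrite an existing 127.0.1.1 entry if present, otherwise append one. A standalone sketch of the same logic (hostname hard-coded from the log; the HOSTNAME variable is illustrative):

	HOSTNAME=no-preload-945581
	if ! grep -q "\s${HOSTNAME}$" /etc/hosts; then
	  if grep -q '^127.0.1.1\s' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
	  fi
	fi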
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
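The crio.minikube step just above writes the runtime's minikube options file and restarts CRI-O so the in-cluster service CIDR is treated as an insecure registry. As a standalone command sequence (values copied from the log):

	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio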
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
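The fix.go lines above read the guest clock over SSH with `date +%s.%N` and compare it to the host clock; the ~93ms delta is within tolerance, so no resync is needed. A minimal sketch of that comparison (guest address from the log; the tolerance check itself is simplified):

	guest=$(ssh docker@192.168.50.251 'date +%s.%N')
	host=$(date +%s.%N)
	# fix.go only resyncs when the skew exceeds its tolerance
	echo "guest clock delta: $(echo "$host - $guest" | bc)s"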
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
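The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup driver to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. Collected into one script (all expressions copied from the logged commands):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" \
	  || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"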
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
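The failed sysctl above is expected on a freshly booted guest: /proc/sys/net/bridge only appears once br_netfilter is loaded, so minikube loads the module and enables IPv4 forwarding. Equivalent shell (commands from the log):

	sudo sysctl net.bridge.bridge-nf-call-iptables \
	  || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"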
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
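The bash one-liner above refreshes the host.minikube.internal mapping: strip any stale entry, append the current gateway IP, and copy the result back over /etc/hosts. Unrolled for readability (IP from the log):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.50.1\thost.minikube.internal\n'
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts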
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
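The "will retry after ..." lines above come from a polling loop that waits for the VM to report an IP address, sleeping a growing, jittered interval between attempts. A minimal Go sketch of that pattern, assuming a simple linear backoff with random jitter; the helper name waitForIP and the delay schedule are illustrative, not minikube's actual retry.go API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it succeeds, sleeping a jittered,
    // growing delay between attempts, like the retry lines in the log.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
    	delay := time.Second
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay += time.Second
    	}
    	return "", errors.New("machine never came up")
    }

    func main() {
    	n := 0
    	ip, err := waitForIP(func() (string, error) {
    		n++ // simulate the DHCP lease appearing on the third poll
    		if n < 3 {
    			return "", errors.New("unable to find current IP address")
    		}
    		return "192.168.39.174", nil
    	}, 10)
    	fmt.Println(ip, err)
    }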
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
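The LoadCachedImages step above transfers each cached tarball to the VM and loads it into the CRI-O image store with `sudo podman load -i <tar>`, one image at a time, while the total duration is tracked. A minimal local sketch of that loop, under the assumption that podman is available; paths are taken from the log and the structure is illustrative, not minikube's cache_images.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	images := []string{
    		"/var/lib/minikube/images/coredns_v1.11.1",
    		"/var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0",
    		"/var/lib/minikube/images/storage-provisioner_v5",
    	}
    	start := time.Now()
    	for _, tar := range images {
    		// In the real run this executes inside the VM over SSH.
    		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
    		if err != nil {
    			fmt.Printf("load %s failed: %v\n%s", tar, err, out)
    			return
    		}
    		fmt.Printf("transferred and loaded %s from cache\n", tar)
    	}
    	fmt.Printf("duration metric: took %s to LoadCachedImages\n", time.Since(start))
    }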
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
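The bash one-liner above makes the control-plane /etc/hosts entry idempotent: it filters out any line already tagged with the control-plane hostname, appends the current IP, and copies the result back. A sketch of the same logic in Go (the file path and IP come from the log; printing instead of the sudo copy-back is a deliberate simplification):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.50.251\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale control-plane mapping so re-runs don't accumulate.
    		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	// The real command writes this back with `sudo cp`; we just print it.
    	fmt.Println(strings.Join(kept, "\n"))
    }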
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
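The hash-and-symlink sequence above follows OpenSSL's CApath convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash (e.g. b5213941), and the cert is linked into /etc/ssl/certs as "<hash>.0" so TLS clients scanning that directory can find it. A minimal Go sketch of the same steps; the PEM path is taken from the log and the error handling is simplified:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Ask openssl for the subject-name hash used for CApath lookup.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink(cert, link); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("trusted via", link)
    }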
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
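Each `openssl x509 -checkend 86400` run above exits nonzero if the certificate expires within the next 86400 seconds, which is how the restart path decides whether existing certs can be reused. The same check can be done natively with crypto/x509; a sketch, with one of the log's cert paths as an example:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of -checkend 86400: fail if NotAfter falls within 24h.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 86400s")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another day")
    }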
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
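The four grep-then-rm exchanges above implement the stale-config cleanup: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm regenerates it. A compact Go restatement of that loop (file list and endpoint are from the log; the structure is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // config already targets the right endpoint; keep it
    		}
    		// Missing or stale: remove so kubeadm can regenerate it.
    		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
    			fmt.Println("remove failed:", err)
    		}
    	}
    }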
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
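The "generating server cert" line above issues a machine certificate whose SAN list covers loopback, the VM's IP, and its host names, signed by the profile CA. A simplified crypto/x509 sketch of that shape; it self-signs for brevity where the real code signs with ca.pem/ca-key.pem, and the SANs and lifetime are copied from the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-366657"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-366657"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed here; the real provisioner signs with the CA keypair.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }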
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
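The clock check above reads the guest's time with `date +%s.%N` over SSH, compares it against the host, and only resynchronizes when the delta exceeds a tolerance. A small Go sketch of that comparison, run locally for simplicity; the tolerance value is an assumption for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// The real code runs this inside the VM over SSH.
    	out, err := exec.Command("date", "+%s.%N").Output()
    	if err != nil {
    		panic(err)
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // illustrative threshold
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }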
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
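
The sed invocations above pin cri-o's pause_image and cgroup_manager in the 02-crio.conf drop-in, then re-add conmon_cgroup = "pod" under the cgroup_manager line. A hedged Go equivalent of the two in-place rewrites (a sketch, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioConf forces pause_image and cgroup_manager in a cri-o drop-in,
    // like the sed commands in the log. Paths and values mirror the log output.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.2", "cgroupfs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
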
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
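
The status-255 sysctl failure above is expected when the br_netfilter module is not loaded: /proc/sys/net/bridge/ only exists once it is, which is why the log treats the failure as "might be okay" and falls back to modprobe. A sketch of that check-then-load logic, assuming root privileges:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBrNetfilter loads br_netfilter if the bridge-nf sysctl is absent,
    // mirroring the fallback visible in the log.
    func ensureBrNetfilter() error {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err == nil {
    		return nil // sysctl present, module already loaded
    	}
    	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    	}
    	_, err := os.Stat(key)
    	return err
    }

    func main() {
    	if err := ensureBrNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
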
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
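
"Will wait 60s for socket path" is a simple poll on the CRI socket after the crio restart. A sketch of such a wait loop (the 500ms poll interval is an assumption, not taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the CRI socket exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
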
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
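
The bash one-liner above upserts the host.minikube.internal mapping: grep -v drops any existing entry for the name, the new line is appended to a temp file, and sudo cp moves it back into place. A Go sketch of the same upsert, assuming the process already has write access to the file (the log instead stages via /tmp and sudo cp):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHosts removes any existing mapping for name, then appends ip<TAB>name,
    // like the grep -v / echo pipeline in the log.
    func upsertHosts(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHosts("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
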
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
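
The sequence above is the usual apiserver start-up arc: anonymous /healthz probes get 403 until RBAC allows the path, then 500 while poststarthooks (rbac/bootstrap-roles, apiservice-registration-controller, and friends) finish, then 200 with body "ok". A sketch of that polling loop, with TLS verification skipped because the probe is unauthenticated (not minikube's actual api_server.go):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz treats 403 and 500 as "not ready yet" and stops on body "ok".
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    			// 403: anonymous access still forbidden; 500: poststarthooks pending.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	fmt.Println(pollHealthz("https://192.168.50.251:8443/healthz", time.Minute))
    }
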
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
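
The retry.go lines above show the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a growing, jittered delay (296ms, 310ms, 414ms, ... 1.3s). A sketch of that pattern; lookupIP below is a placeholder for the libvirt lease query, not a real API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the DHCP-lease query against libvirt.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    // waitForIP retries with a growing, jittered delay until an IP appears.
    func waitForIP(timeout time.Duration) (string, error) {
    	delay := 300 * time.Millisecond
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow roughly like the intervals in the log
    	}
    	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
    }

    func main() {
    	ip, err := waitForIP(5 * time.Second)
    	fmt.Println(ip, err)
    }
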
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
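
The preload path is: stat (miss) → scp the ~473MB tarball to /preloaded.tar.lz4 → tar -I lz4 into /var with security.capability xattrs preserved → remove the tarball. A sketch of the extract-and-clean step, shelling out the same way the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload unpacks the lz4 preload tarball into /var, keeping xattrs,
    // then deletes it, mirroring the tar and rm steps in the log.
    func extractPreload(tarball string) error {
    	cmd := exec.Command("sudo", "tar", "--xattrs",
    		"--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("tar: %v: %s", err, out)
    	}
    	return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
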
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
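
The cache flow above: the local daemon lookup fails for every image, `podman image inspect` on the guest then shows each image missing at the pinned hash, so each tag is rmi'd and queued to load from the on-disk cache under .minikube/cache/images. A rough sketch of that decision; the `podman load -i` call for the load step is an assumption, not taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ensureImage checks whether the runtime has the image at the expected ID
    // and, if not, removes the stale tag and loads the cached tarball instead.
    func ensureImage(image, wantID, cachedTarball string) error {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err == nil && strings.TrimSpace(string(out)) == wantID {
    		return nil // already present at the right hash
    	}
    	// Stale or missing: drop the tag, then load from the local cache.
    	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
    	if out, err := exec.Command("sudo", "podman", "load", "-i", cachedTarball).CombinedOutput(); err != nil {
    		return fmt.Errorf("podman load: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := ensureImage("registry.k8s.io/pause:3.2",
    		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
    		"/home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2")
    	fmt.Println(err)
    }
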
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
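
pod_ready.go polls each system-critical pod for the Ready condition, and skips pods hosted on a node whose own Ready status is "False" (the "skipping!" lines above). A sketch of the basic condition poll via kubectl; context, namespace, and pod name below are taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodReady polls the pod's Ready condition until "True" or timeout.
    func waitPodReady(ctx, ns, pod string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
    			"get", "pod", pod, "-o",
    			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, pod, timeout)
    }

    func main() {
    	fmt.Println(waitPodReady("no-preload-945581", "kube-system", "kube-proxy-f5ttf", 4*time.Minute))
    }
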
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
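The hash names linked above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: `openssl x509 -hash` prints a short hash of the certificate's subject, and a `<hash>.0` symlink in /etc/ssl/certs lets OpenSSL find the CA by that hash. A minimal local sketch of the convention, assuming a root shell and an `openssl` binary on PATH (illustrative, not minikube's actual certs.go implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash of a CA
	// certificate and points /etc/ssl/certs/<hash>.0 at it, mirroring
	// the `openssl x509 -hash` + `ln -fs` pair in the log above.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}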
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
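The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check in pure Go, as a sketch (the cert path and 24h window come from the log; the helper name is hypothetical):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, matching `openssl x509 -noout -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}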
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
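Because the config check above failed, the restart path replays individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml instead of running a full init. A sketch of that replay loop, with the phase list and paths taken verbatim from the log (the loop itself is illustrative, not minikube's kubeadm.go code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The five init phases the log runs, in order.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
	}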
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
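The half-second cadence of the `pgrep -xnf kube-apiserver.*minikube.*` runs here is minikube polling for the apiserver process after kubelet-start; pgrep exits 0 once a matching process exists. A sketch of that wait loop (only the command and the ~500ms interval come from the log; the 4-minute deadline is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until a kube-apiserver process
	// appears or the deadline passes.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}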
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
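The server cert generated here has to cover every name and address the machine can be reached by, which is why the SAN list spans the loopback address, the VM IP, the hostname, and the generic names. A compact sketch of issuing such a cert with Go's crypto/x509 (self-signed for brevity; minikube signs with its CA key instead, and the 26280h lifetime mirrors the CertExpiration in the profile config above):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-360389"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			// SANs copied from the san=[...] list in the log line above.
			DNSNames:    []string{"embed-certs-360389", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.32")},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}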
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
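(The delta is simply guest minus host: 1721609447.006036489 − 1721609446.915558854 = 0.090477635 s, i.e. the reported 90.477635ms, comfortably inside the drift tolerance.)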
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
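Taken together, these sed edits leave the /etc/crio/crio.conf.d/02-crio.conf drop-in looking roughly like the excerpt below. This is reconstructed from the commands in the log rather than captured from the VM, and the section headers are assumed from cri-o's stock layout:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]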
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
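The status-255 sysctl failure above is expected on a fresh boot: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube falls back to `modprobe br_netfilter` and then enables IP forwarding by writing directly to /proc/sys/net/ipv4/ip_forward.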
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
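Note: the tar flags here matter: `--xattrs --xattrs-include security.capability` preserves file-capability xattrs during extraction so preloaded binaries keep their capability bits, and `-I lz4` hands decompression to the lz4 binary. A quick, illustrative way to confirm capabilities survived (the path is an assumption):

    # Spot-check that capability xattrs survived extraction (path illustrative).
    sudo getcap -r /var/lib/minikube 2>/dev/null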
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
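Note: the `retry.go:31` lines above show libmachine polling libvirt's DHCP leases with a growing, jittered backoff until the domain reports an address. An equivalent manual check against the same network (network name and MAC taken from the log) might look like:

    # Poll libvirt's DHCP leases until the domain's MAC acquires an address.
    until virsh net-dhcp-leases mk-default-k8s-diff-port-214905 | grep -q 52:54:00:8d:14:d0; do
      sleep 1
    done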
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
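Note: this half-second loop is the apiserver process probe: `-f` matches against the full command line rather than just the process name, `-x` requires the regex to match that whole line, and `-n` returns only the newest match. Reproduced as a shell wait (same pattern as the log):

    # Block until a kube-apiserver whose command line mentions minikube exists.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done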
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
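Note: the empty `ExecStart=` followed by a populated one in the drop-in above is the standard systemd override idiom: a drop-in must first clear the inherited `ExecStart` before assigning a new command line. To inspect the merged result on the node (standard systemd tooling, assumed present in the guest):

    # Show the base unit plus every drop-in that overrides it.
    systemctl cat kubelet
    # Summarize all overridden units on the system.
    systemd-delta --type=extended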
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
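Note: the three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered to `/var/tmp/minikube/kubeadm.yaml.new` and applied phase by phase further down. One way to sanity-check a config like this offline, assuming kubeadm >= 1.26 on the node:

    # Validate the generated kubeadm config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new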
	
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
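Note: each `ln -fs ... /etc/ssl/certs/<hash>.0` above pairs a CA file with its OpenSSL subject hash, the lookup key OpenSSL uses for hashed trust directories; the link names (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) are reproducible from the certificates themselves:

    # The 8-hex-digit link name is the certificate's subject hash.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem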
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
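Note: the `-checkend 86400` probes ask whether each certificate is still valid 24 hours (86,400 s) from now; openssl exits 0 if the cert will not expire within that window and 1 if it will, which is what lets the restart path below skip regeneration. For example:

    # Exit 0 while the cert remains valid for at least the next 24h.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "still valid" || echo "expires within 24h"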
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
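Note: the four grep/rm cycles above implement one rule: any kubeconfig that does not point at the expected control-plane endpoint is deleted so the `kubeconfig` phase regenerates it. The same logic as a loop (endpoint from the log):

    # Drop kubeconfigs that don't reference the expected endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done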
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
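Note: the healthz polling above walks the usual apiserver startup sequence: connection refused while the static pod starts, 403 for the anonymous user until RBAC bootstrap completes, 500 while `poststarthook/rbac/bootstrap-roles` and the priority-class hook are still pending, then 200. The same probe by hand (`-k` only because the poll runs before client certs are wired up; endpoint from the log):

    # Verbose healthz lists each [+]/[-] check, matching the log output above.
    curl -sk 'https://192.168.72.32:8443/healthz?verbose'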
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
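Note: enabling the bridge CNI amounts to dropping a conflist into `/etc/cni/net.d`; the runtime loads the lexically first config file there, which is presumably why the file is named `1-k8s.conflist`. To confirm what landed on the node:

    # The lexically first conflist in this directory wins.
    ls /etc/cni/net.d/ && sudo head /etc/cni/net.d/1-k8s.conflist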
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
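Note: every wait above short-circuits with the same `pod_ready.go:97` skip because pod readiness is gated on node readiness; until the node condition flips, none of the per-pod checks can pass. The gating condition can be read directly (assuming the kubeconfig context mirrors the profile name):

    # A pod can't report Ready while its node is Ready=False; check the node first.
    kubectl --context embed-certs-360389 get node embed-certs-360389 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'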
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
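Note: the `-16` read from `/proc/<pid>/oom_adj` confirms the apiserver is strongly shielded from the OOM killer; the kubelet actually writes the modern `oom_score_adj`, which the kernel mirrors onto the legacy `oom_adj` scale (-17..15). Both knobs can be inspected together:

    # Compare the legacy and current OOM-protection values for the apiserver.
    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo cat "/proc/$pid/oom_adj" "/proc/$pid/oom_score_adj"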
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
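As the lines above show, the addon installer first `scp`s each manifest under `/etc/kubernetes/addons/` and then applies them all in a single `kubectl apply` with repeated `-f` flags. A simplified local equivalent of that final step, assuming the same paths (the real call is routed through minikube's `ssh_runner`):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		// Build one "kubectl apply -f a -f b ..." call so all the
		// metrics-server objects are created together, as in the log.
		cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f " +
			strings.Join(manifests, " -f ")
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		if err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}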
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
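The burst of `pgrep -xnf kube-apiserver.*minikube.*` lines from process 71766 is a roughly 500 ms poll waiting for a restarted kube-apiserver process to appear. A sketch of that loop (hypothetical helper, same pgrep invocation; the two-minute deadline is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// -x: match the whole command line, -n: newest, -f: full cmdline
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("kube-apiserver is up, pid %s", out)
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}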
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
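The inline shell above makes the new hostname resolve locally by pinning it to 127.0.1.1 in `/etc/hosts`, adding the entry only if no line already carries the name. Rendered as a hypothetical Go helper that produces the same script (minikube's provisioner builds it similarly):

	package main

	import "fmt"

	// etcHostsCmd returns shell that maps 127.0.1.1 to hostname, adding
	// the entry only when /etc/hosts does not already mention the name.
	func etcHostsCmd(hostname string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(etcHostsCmd("default-k8s-diff-port-214905"))
	}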
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
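The delta reported above is simply the guest timestamp minus the host-side reference: 1721609467.506036600 − 1721609467.424041395 ≈ 0.082 s. A worked version of the check (the one-second tolerance here is an assumption, not minikube's configured value):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1721609467, 506036600)  // from `date +%s.%N` on the guest
		remote := time.Unix(1721609467, 424041395) // host-side reference time
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed threshold
		fmt.Printf("delta=%v within tolerance=%v\n", delta, delta < tolerance)
		// prints: delta=81.995205ms within tolerance=true
	}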
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
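The run above reconfigures CRI-O entirely through in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default_sysctls), then reloads systemd and restarts the service. Below is a minimal Go sketch of that pattern; runCmd is a hypothetical stand-in for minikube's ssh_runner, and only the two key edits plus the restart are shown.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs one command locally and surfaces its combined output on
// failure. It is an assumption standing in for minikube's SSH runner.
func runCmd(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", args[0], err, out)
	}
	return nil
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		// Point CRI-O at the pause image named in the log.
		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf},
		// Force cgroupfs as the cgroup manager, matching the log.
		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		// Reload unit files and restart CRI-O so the edits take effect.
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if err := runCmd(s...); err != nil {
			fmt.Println(err)
			return
		}
	}
}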
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
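Here the runner probes for /preloaded.tar.lz4 with stat, copies the cached preload tarball over when the probe fails, and unpacks it into /var with tar's lz4 filter while preserving xattrs so file capabilities survive. A minimal local sketch of the check-then-extract step, assuming the paths from the log and omitting the SSH transport:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		// In the real flow the tarball is scp'd from the host cache here.
		fmt.Println("preload missing, would copy it over:", err)
		return
	}
	// Same flags as the log: keep xattrs/capabilities, decompress via lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}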
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
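Both host.minikube.internal and control-plane.minikube.internal are pinned with the same idempotent one-liner: grep -v strips any stale line for the name, the fresh mapping is appended, and the temp file is copied back over /etc/hosts. A local Go sketch of that update (minikube performs it over SSH via the shell command above):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing line ending in "<tab><name>" and
// appends a fresh "ip<tab>name" mapping, mirroring the grep -v / echo /
// cp pipeline in the log.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.61.97", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}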
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
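Each CA above is installed twice: the PEM is copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs as "<subject-hash>.0", because OpenSSL resolves trust anchors by the output of openssl x509 -hash (b5213941, 51391683, and 3ec20f2e in the log). A sketch of the hash-then-symlink step, assuming the minikubeCA.pem path from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	// Print the subject hash OpenSSL uses for CA directory lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: replace any existing link, then recreate it.
	os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}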
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
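The -checkend 86400 probes above make openssl exit non-zero when a certificate will expire within 24 hours, which is what drives the regenerate-or-skip decision. A small wrapper showing how that exit code maps to a boolean:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay reports whether the certificate at certPath will be
// expired 86400 seconds (24h) from now; openssl exits 0 when it is still
// valid at that point and 1 otherwise. A sketch: a missing or unreadable
// file also yields a non-zero exit, which a real check would distinguish.
func expiresWithinADay(certPath string) (bool, error) {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	if err == nil {
		return false, nil // still valid tomorrow
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // non-zero exit: expiring within 24h
	}
	return false, fmt.Errorf("openssl: %w", err)
}

func main() {
	soon, err := expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(soon, err)
}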
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
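Rather than a full kubeadm init, the restart replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the saved /var/tmp/minikube/kubeadm.yaml. A hedged sketch of that sequence; the real runs above also prepend /var/lib/minikube/binaries/v1.30.3 to PATH, which is omitted here:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// The phase order matches the five "kubeadm init phase" runs above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}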
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
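The half-second pgrep loop above (also visible in the interleaved 71766 run) simply waits for a kube-apiserver process whose full command line mentions minikube to appear. A standalone sketch of the same poll:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		// -x: exact match, -n: newest process, -f: match the full command line.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process is up")
			return
		}
	}
	fmt.Println("timed out waiting for apiserver process")
}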
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
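The healthz wait just completed shows the usual apiserver startup progression: 403 while RBAC bootstrap roles are missing (anonymous requests are rejected), 500 while the rbac/bootstrap-roles and scheduling poststarthooks finish, then 200 "ok". A sketch of such a poller, skipping TLS verification as a health probe does; the URL is the one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.61.97:8444/healthz"
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // apiserver not listening yet
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthy:", string(body))
			return
		}
		// 403 and 500 are expected transiently during startup; keep polling.
	}
	fmt.Println("timed out waiting for apiserver health")
}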
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
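The interleaved pod_ready lines come from three parallel test processes (PIDs 71396, 72069, 71227), each polling the Ready condition of its own metrics-server pod. A hedged kubectl equivalent of one poll step (the label selector and context name are assumptions, not taken from the log; the exact pod_ready.go logic may differ):

    kubectl --context <profile> -n kube-system get pod \
      -l k8s-app=metrics-server \
      -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'
    # prints "False" while the pod is unready, which is what pod_ready.go keeps observing above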
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
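Each gather cycle issues the same per-component query, `sudo crictl ps -a --quiet --name=<component>`, and treats empty output as "no container found". The loop below is a sketch that reproduces the eight queries exactly as they appear in the Run: lines above:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # -a includes exited containers; --quiet prints only IDs; empty output = nothing matched
      sudo crictl ps -a --quiet --name="$c"
    done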
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
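Alongside the container queries, every cycle tails the same four log sources. The commands below are copied verbatim from the Run: lines and can be replayed directly on the node to reproduce the "Gathering logs for ..." steps:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a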
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
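Each cycle opens with the pgrep probe seen in the Run: lines, checking whether a kube-apiserver process matching the minikube pattern is alive before falling back to the CRI queries. Replayed by hand:

    # -f matches against the full command line, -x requires the whole line to
    # match the pattern, -n picks the newest matching process
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # exit status is non-zero when no such process exists, as is the case throughout this log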
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
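Note: the pod_ready lines from PIDs 71396, 72069, and 71227 are interleaved from other tests running in parallel; each is polling a metrics-server pod whose Ready condition stays False. A minimal client-go sketch of that readiness check, assuming a reachable cluster and the default kubeconfig (hypothetical helper; a real caller would select the pod by label rather than hard-coding the hashed name from the log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has condition Ready=True -- the
    // same check pod_ready.go keeps logging as "False" above.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := podReady(cs, "kube-system", "metrics-server-569cc877fc-k68zp")
        fmt.Println(ready, err)
    }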
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
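Note: each cycle begins with `sudo pgrep -xnf kube-apiserver.*minikube.*`, which selects the newest process (-n) whose full command line (-f) exactly matches the pattern (-x). Its output is not logged here, but it presumably finds nothing, consistent with the empty container listings that follow.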
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
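	The interleaved pod_ready lines come from three other test processes (71396, 72069, 71227) polling their metrics-server pods, none of which ever reports Ready. An equivalent one-shot readiness check, purely illustrative (the pod name is copied from the log; the minikube profile context is omitted here):

	    # Illustrative readiness poll, not taken from the log itself.
	    kubectl -n kube-system wait --for=condition=Ready \
	      pod/metrics-server-569cc877fc-dm7k7 --timeout=60s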
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
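	The failed "describe nodes" step keeps reporting that the connection to localhost:8443 was refused, which is consistent with the empty kube-apiserver probe in the same cycle: nothing is listening on the apiserver port, so no kubectl call against it can succeed until the control plane comes back. A hedged way to confirm this directly (the kubectl binary path and kubeconfig are from the log; the healthz curl is a standard Kubernetes check, not something this log performs):

	    # First line mirrors the log's kubectl invocation; the curl fallback is an assumption.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      || curl -sk https://localhost:8443/healthz   # refused while the apiserver is absent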
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
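	With every container probe coming back empty, each cycle falls back to host-level log sources instead: the kubelet and CRI-O journals, a filtered dmesg, and a raw container status listing. The four commands below are exactly as invoked in the cycles above, collected here for reference:

	    # Fallback log sources, verbatim from the gather cycles in this log.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a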
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
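(The describe-nodes step fails for the same reason the container scan came back empty: with no kube-apiserver container running, nothing is listening on localhost:8443, so kubectl's connection is refused and the command exits with status 1. The exact invocation from the log:)

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig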
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
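(The gathering commands are mostly self-describing: `journalctl -u kubelet -n 400` and `journalctl -u crio -n 400` pull the last 400 lines of each unit's journal, and the util-linux dmesg flags -- `-P` no pager, `-H` human-readable timestamps, `-L=never` no color, `--level warn,err,crit,alert,emerg` -- restrict kernel messages to warnings and worse. The "container status" step ends with a shell fallback so it still produces output when crictl is missing or failing; the same pattern in isolation:)

	# Prefer crictl (resolved via `which` when possible), fall back to docker.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a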
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
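(Interleaved with the retries above, three other test processes -- pids 71396, 72069 and 71227 in the log -- are polling their metrics-server pods, which keep reporting Ready=False. The pod_ready helper presumably does this through the Go client; a roughly equivalent manual check, with the pod name taken from the log, would be:)

	kubectl --namespace kube-system get pod metrics-server-569cc877fc-k68zp \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" here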
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
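(The timestamps show the whole sequence repeating on a roughly three-second cadence -- 00:54:02, :05, :08, :11, and so on: each pass starts with a pgrep probe for the apiserver process and, finding none, falls through to the container scan and log gathering. A rough sketch of that wait loop, with the loop structure itself an assumption inferred from the timestamps:)

	# pgrep flags from the log: -x exact match, -n newest, -f match the full command line
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # cadence inferred from the log timestamps
	  # ...re-run the crictl scan and log gathering shown above...
	done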
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
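The interleaved pod_ready lines (PIDs 71396, 72069, and 71227 belong to clusters being tested in parallel) each poll one metrics-server pod until its PodReady condition turns True or a 4m0s budget expires. A hedged client-go sketch of such a wait, with the pod name and kubeconfig path taken from the log (an illustration, not minikube's pod_ready.go itself):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-569cc877fc-k68zp", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}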
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
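The "connection refused" above is the expected failure mode at this point: the crictl queries just before it found no kube-apiserver container, so nothing is listening on localhost:8443 at all. A tiny probe that distinguishes that state from a listening-but-unhealthy server:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" means no listener on the port, matching the
	// zero kube-apiserver containers found above; a hung-but-listening
	// server would instead accept the TCP connection and then stall.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}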
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
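The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint. Here every file is already gone (kubeadm reset removed them), so each grep exits with status 2 and each rm is a no-op. A sketch of the same sweep, assuming passwordless sudo (an illustration, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep -q exits 0 only if the endpoint appears in the file;
		// any other exit (no match, or file missing) marks it stale.
		if err := exec.Command("sudo", "grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%s is stale or missing (%v), removing\n", f, err)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}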
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
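Every "[certs] Using existing ..." line means kubeadm found valid material under the certificateDir (/var/lib/minikube/certs here) and skipped regeneration. A quick hedged sketch for inspecting one such certificate's validity window; the file name apiserver.crt is the standard kubeadm one, assumed rather than shown in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject, cert.NotBefore, cert.NotAfter)
}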
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
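The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which lets joining nodes pin the CA without a pre-shared file. A sketch that recomputes it; the CA path below is the kubeadm default, assumed here (on this minikube VM the certs live under /var/lib/minikube/certs):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}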
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
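The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A sketch of the same probe, with the endpoint taken from the log; TLS verification is skipped only because this throwaway check does not load the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.32:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}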
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
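"scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)" is minikube writing its bridge CNI config straight from an in-memory buffer. A representative bridge-plus-portmap conflist written the same way; the exact 496-byte payload is not shown in the log, so the field values below (subnet included) are illustrative:

package main

import (
	"fmt"
	"os"
)

// An illustrative bridge CNI configuration, not the exact file
// minikube wrote; the pod subnet here is an assumption.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println("write failed (are you root?):", err)
		return
	}
	fmt.Println("wrote bridge CNI config")
}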
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
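Note: the whole log-gathering pass above reduces to crictl and journalctl invocations over SSH. To reproduce any one of them directly on the node (container IDs come from the first command, exactly as in the log):

	# List containers for one component, then tail that container's logs:
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>
	# Runtime and kubelet logs come from journald:
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400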
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ...
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
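Note: each "new ssh client" line is an ordinary key-based SSH session into the VM. A manual equivalent using the key path, user, and IP shown above:

	ssh -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa \
	    docker@192.168.50.251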
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods matching labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
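Note: the node_ready/pod_ready waits here are client-go polls; kubectl's built-in condition waiting is roughly equivalent (names and the 6m budget from the log; sketch only, not the harness's implementation):

	kubectl wait --for=condition=Ready node/no-preload-945581 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/coredns-5cfdc65f69-68wll --timeout=6m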
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical pods matching labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
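Note: metrics-server is still Pending at this point; once it does go Ready, the quickest end-to-end check is the metrics API itself:

	# Both fail while metrics-server is Pending, as it is throughout this run:
	kubectl top nodes
	kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes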
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ...
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical pods matching labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
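Note: the four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file is kept only if it still references the expected control-plane endpoint, and here all four are simply absent after the reset. As a loop, the logic is roughly (endpoint from the log; the loop form is illustrative, not minikube's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" \
	      "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done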
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
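The [WARNING Service-Kubelet] above also appears on successful inits in this run, so it does not block bootstrap here; on a host where the kubelet should survive reboots it can be cleared as the message itself says. A minimal sketch, assuming systemd:

	sudo systemctl enable kubelet.service
	systemctl is-enabled kubelet    # prints "enabled" once persisted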
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
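The bridge CNI step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the payload itself is not in the log. The sketch below shows where to read it on the node; the contents shown are a representative assumption, not copied from this run:

	cat /etc/cni/net.d/1-k8s.conflist
	# representative bridge conflist (assumed):
	# {
	#   "cniVersion": "0.3.1",
	#   "name": "bridge",
	#   "plugins": [
	#     {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	#      "ipMasq": true, "hairpinMode": true,
	#      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	#     {"type": "portmap", "capabilities": {"portMappings": true}}
	#   ]
	# }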
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
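The 12.2s elevateKubeSystemPrivileges wait above is the retry loop on "kubectl get sa default" plus the minikube-rbac clusterrolebinding creation. Both can be inspected by hand with the same binary and kubeconfig paths the log uses; a sketch:

	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
		get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
		get sa default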
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
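The metrics-server addon lands as four manifests in a single kubectl apply. A hedged follow-up check, with the deployment name inferred from the metrics-server-569cc877fc-d4z4t pod seen below rather than stated in the log:

	kubectl -n kube-system rollout status deploy/metrics-server --timeout=120s
	kubectl -n kube-system get apiservice v1beta1.metrics.k8s.io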
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
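The enabled set can be cross-checked from the host with the same binary this report invokes elsewhere; the profile name is taken from the log:

	out/minikube-linux-amd64 -p default-k8s-diff-port-214905 addons list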
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
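The healthz probe runs against the API server on this profile's non-default port 8444, and the log records a 200 with body "ok". Reproduced by hand (endpoint from the log; -k skips certificate verification for a quick probe):

	curl -k https://192.168.61.97:8444/healthz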
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
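With the kubeconfig written, a quick sanity sweep mirrors the readiness waits above. The context name is assumed to equal the profile name, which is minikube's default:

	kubectl --context default-k8s-diff-port-214905 get nodes
	kubectl --context default-k8s-diff-port-214905 -n kube-system get pods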
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
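This second init (pid 71766, Kubernetes v1.20.0) fails because the kubelet never answers on port 10248. The triage steps the output itself recommends, collected into one pass; the crictl endpoint is verbatim from the log, and the journalctl tail is an added convenience:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause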
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
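The four grep-then-rm pairs above implement the stale-config check: each /etc/kubernetes/*.conf must mention the expected control-plane endpoint, or it is deleted before the retry that follows. The same sweep as one loop, equivalent to what the log does file by file:

	for f in admin kubelet controller-manager scheduler; do
		sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
			|| sudo rm -f "/etc/kubernetes/$f.conf"
	done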
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
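	kubeadm's advice above reduces to a short triage sequence on the failing node. Consolidated here for reference, using only commands and the CRI-O socket path quoted in the log itself (CONTAINERID is kubeadm's own placeholder and must be substituted with a real ID):
	
	    systemctl status kubelet        # is the kubelet unit running at all?
	    journalctl -xeu kubelet         # recent kubelet errors (cgroup driver, config, certs)
	    # list any control-plane containers CRI-O managed to start, then inspect a failing one
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID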
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
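	The evidence minikube gathers above can be reproduced by hand on the node; every command below is quoted verbatim from the Run: lines in this log:
	
	    sudo crictl ps -a --quiet --name=kube-apiserver   # repeated per component: etcd, coredns, kube-scheduler, ...
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400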
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 
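	The suggestion above points at a possible kubelet/runtime cgroup-driver mismatch. A hypothetical retry with the suggested override would look like the following, where PROFILE stands in for the test's profile name:
	
	    minikube start -p PROFILE --extra-config=kubelet.cgroup-driver=systemd
	    # if the kubelet still fails to come up, this is the log the suggestion asks for:
	    journalctl -xeu kubelet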
	
	
	==> CRI-O <==
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.243771936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610267243740568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0053c8a9-0b2e-479a-8353-859898df5154 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.244483603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d88e63a2-454d-43c4-a031-702f0de8526f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.244536991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d88e63a2-454d-43c4-a031-702f0de8526f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.244746286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609490110721642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f229c6081d935a975ee7e239526c7d0a9f44f043cdc7a6266155565912b363cb,PodSandboxId:7b1d393663db911bc0907f85b5c7c79659de3ba431679871a54948fac7379d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721609469280681964,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c23b021a-f68e-40c7-ac17-1ec62007d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 86213cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc,PodSandboxId:eda7d19c94d09f892d095f975472869b33a767597962d6e9bc4b4de5d137abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609466935579709,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7mzsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d43245-3f6c-4d8b-bffa-bc8298b65025,},Annotations:map[string]string{io.kubernetes.container.hash: a0707b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a,PodSandboxId:842461323b73ba75e0e7d441f60ee0c82ab302b3a615dbc5869d7332037a4404,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609459372224064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167c03f0-5b03-433a-9
51c-229baa23eb02,},Annotations:map[string]string{io.kubernetes.container.hash: 7aff9734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721609459296292870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879aff
e57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24,PodSandboxId:66e3a11ef4d843a168d3750da15a4ef3354149ea9f08fa855d63fbd152b3c225,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609455675586955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc781974ce92ff92256d8d2d6d76d077,},Annotations:map[string]string{io.kub
ernetes.container.hash: 30fd19d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e,PodSandboxId:35d2b53feb9b2411e6fea4cae26ca9704b9ee3278751b0d59a7ccd9363481dff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609455640479511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c50e8fd585c2c29aa684ef590528913,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 60414973,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a,PodSandboxId:a3b49133ad1b8b60fca893c4673f2e5a0cf56b6e67287b84b814c2f4ea3bbe61,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609455643228307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e7df2c2d19498268e0ef65b20005b2,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e,PodSandboxId:1ebf78c891885178423c21dfe5dffc296ae7b95ed94f3ec7d93be573f695a08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609455615904992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89427a1c4949093b02da2b95b772c63e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d88e63a2-454d-43c4-a031-702f0de8526f name=/runtime.v1.RuntimeService/ListContainers
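	The Request/Response pairs in this CRI-O debug log are CRI gRPC calls made over the runtime socket during periodic status polling (most likely by the kubelet). Each RPC named in the entries maps onto a standard crictl query against the same socket, e.g.:
	
	    crictl --runtime-endpoint /var/run/crio/crio.sock version       # RuntimeService/Version
	    crictl --runtime-endpoint /var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers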
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.281659226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=daa1cd6a-0f81-41e7-abfa-55f30bf2b77a name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.281745131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=daa1cd6a-0f81-41e7-abfa-55f30bf2b77a name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.282748758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6572e07-b4a1-4e27-ae7f-5ce3f64c46ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.283133055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610267283109015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6572e07-b4a1-4e27-ae7f-5ce3f64c46ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.283666456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ce3f818-bf84-4bbe-bde3-d97990fe6104 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.283721638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ce3f818-bf84-4bbe-bde3-d97990fe6104 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.283932982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609490110721642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f229c6081d935a975ee7e239526c7d0a9f44f043cdc7a6266155565912b363cb,PodSandboxId:7b1d393663db911bc0907f85b5c7c79659de3ba431679871a54948fac7379d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721609469280681964,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c23b021a-f68e-40c7-ac17-1ec62007d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 86213cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc,PodSandboxId:eda7d19c94d09f892d095f975472869b33a767597962d6e9bc4b4de5d137abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609466935579709,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7mzsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d43245-3f6c-4d8b-bffa-bc8298b65025,},Annotations:map[string]string{io.kubernetes.container.hash: a0707b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a,PodSandboxId:842461323b73ba75e0e7d441f60ee0c82ab302b3a615dbc5869d7332037a4404,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609459372224064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167c03f0-5b03-433a-9
51c-229baa23eb02,},Annotations:map[string]string{io.kubernetes.container.hash: 7aff9734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721609459296292870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879aff
e57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24,PodSandboxId:66e3a11ef4d843a168d3750da15a4ef3354149ea9f08fa855d63fbd152b3c225,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609455675586955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc781974ce92ff92256d8d2d6d76d077,},Annotations:map[string]string{io.kub
ernetes.container.hash: 30fd19d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e,PodSandboxId:35d2b53feb9b2411e6fea4cae26ca9704b9ee3278751b0d59a7ccd9363481dff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609455640479511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c50e8fd585c2c29aa684ef590528913,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 60414973,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a,PodSandboxId:a3b49133ad1b8b60fca893c4673f2e5a0cf56b6e67287b84b814c2f4ea3bbe61,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609455643228307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e7df2c2d19498268e0ef65b20005b2,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e,PodSandboxId:1ebf78c891885178423c21dfe5dffc296ae7b95ed94f3ec7d93be573f695a08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609455615904992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89427a1c4949093b02da2b95b772c63e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ce3f818-bf84-4bbe-bde3-d97990fe6104 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.320054020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26ebf41d-fb71-4e2c-9400-92feebfa1671 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.320137561Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26ebf41d-fb71-4e2c-9400-92feebfa1671 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.321223213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cb5644d-2bc6-4ace-99b0-9102236f717a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.321840218Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610267321811800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cb5644d-2bc6-4ace-99b0-9102236f717a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.322325975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c085c33f-8c1b-4969-87ba-14e667e6f98f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.322461389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c085c33f-8c1b-4969-87ba-14e667e6f98f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.322777602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609490110721642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f229c6081d935a975ee7e239526c7d0a9f44f043cdc7a6266155565912b363cb,PodSandboxId:7b1d393663db911bc0907f85b5c7c79659de3ba431679871a54948fac7379d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721609469280681964,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c23b021a-f68e-40c7-ac17-1ec62007d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 86213cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc,PodSandboxId:eda7d19c94d09f892d095f975472869b33a767597962d6e9bc4b4de5d137abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609466935579709,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7mzsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d43245-3f6c-4d8b-bffa-bc8298b65025,},Annotations:map[string]string{io.kubernetes.container.hash: a0707b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a,PodSandboxId:842461323b73ba75e0e7d441f60ee0c82ab302b3a615dbc5869d7332037a4404,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609459372224064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167c03f0-5b03-433a-9
51c-229baa23eb02,},Annotations:map[string]string{io.kubernetes.container.hash: 7aff9734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721609459296292870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879aff
e57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24,PodSandboxId:66e3a11ef4d843a168d3750da15a4ef3354149ea9f08fa855d63fbd152b3c225,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609455675586955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc781974ce92ff92256d8d2d6d76d077,},Annotations:map[string]string{io.kub
ernetes.container.hash: 30fd19d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e,PodSandboxId:35d2b53feb9b2411e6fea4cae26ca9704b9ee3278751b0d59a7ccd9363481dff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609455640479511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c50e8fd585c2c29aa684ef590528913,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 60414973,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a,PodSandboxId:a3b49133ad1b8b60fca893c4673f2e5a0cf56b6e67287b84b814c2f4ea3bbe61,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609455643228307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e7df2c2d19498268e0ef65b20005b2,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e,PodSandboxId:1ebf78c891885178423c21dfe5dffc296ae7b95ed94f3ec7d93be573f695a08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609455615904992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89427a1c4949093b02da2b95b772c63e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c085c33f-8c1b-4969-87ba-14e667e6f98f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.355472922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=100fed09-7fe3-442a-8aaa-e6f678e50425 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.355890192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=100fed09-7fe3-442a-8aaa-e6f678e50425 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.357124095Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41ba11c7-d91d-4bc9-97fc-c8a082783c29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.357590402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610267357563152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41ba11c7-d91d-4bc9-97fc-c8a082783c29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.358153088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c56c333b-996a-446d-a4ab-282c8aeccef9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.358216630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c56c333b-996a-446d-a4ab-282c8aeccef9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:27 embed-certs-360389 crio[721]: time="2024-07-22 01:04:27.358445786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609490110721642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f229c6081d935a975ee7e239526c7d0a9f44f043cdc7a6266155565912b363cb,PodSandboxId:7b1d393663db911bc0907f85b5c7c79659de3ba431679871a54948fac7379d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721609469280681964,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c23b021a-f68e-40c7-ac17-1ec62007d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 86213cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc,PodSandboxId:eda7d19c94d09f892d095f975472869b33a767597962d6e9bc4b4de5d137abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609466935579709,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7mzsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d43245-3f6c-4d8b-bffa-bc8298b65025,},Annotations:map[string]string{io.kubernetes.container.hash: a0707b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a,PodSandboxId:842461323b73ba75e0e7d441f60ee0c82ab302b3a615dbc5869d7332037a4404,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609459372224064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167c03f0-5b03-433a-9
51c-229baa23eb02,},Annotations:map[string]string{io.kubernetes.container.hash: 7aff9734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721609459296292870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879aff
e57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24,PodSandboxId:66e3a11ef4d843a168d3750da15a4ef3354149ea9f08fa855d63fbd152b3c225,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609455675586955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc781974ce92ff92256d8d2d6d76d077,},Annotations:map[string]string{io.kub
ernetes.container.hash: 30fd19d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e,PodSandboxId:35d2b53feb9b2411e6fea4cae26ca9704b9ee3278751b0d59a7ccd9363481dff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609455640479511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c50e8fd585c2c29aa684ef590528913,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 60414973,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a,PodSandboxId:a3b49133ad1b8b60fca893c4673f2e5a0cf56b6e67287b84b814c2f4ea3bbe61,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609455643228307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e7df2c2d19498268e0ef65b20005b2,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e,PodSandboxId:1ebf78c891885178423c21dfe5dffc296ae7b95ed94f3ec7d93be573f695a08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609455615904992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89427a1c4949093b02da2b95b772c63e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c56c333b-996a-446d-a4ab-282c8aeccef9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d8e399257c6a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   7eb1781846376       storage-provisioner
	f229c6081d935       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   7b1d393663db9       busybox
	93b990e487bfd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   eda7d19c94d09       coredns-7db6d8ff4d-7mzsv
	fc4ac4f1206a6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   842461323b73b       kube-proxy-8j7bx
	8efc9587f83d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   7eb1781846376       storage-provisioner
	a6a52deb00960       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   66e3a11ef4d84       etcd-embed-certs-360389
	193fb390e4d47       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   a3b49133ad1b8       kube-controller-manager-embed-certs-360389
	62e46b9a1718a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   35d2b53feb9b2       kube-apiserver-embed-certs-360389
	deb1a27ba8547       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   1ebf78c891885       kube-scheduler-embed-certs-360389
	
	
	==> coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58239 - 55400 "HINFO IN 7183721124252281798.7244563882075223873. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013611028s
	
	
	==> describe nodes <==
	Name:               embed-certs-360389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-360389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=embed-certs-360389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_44_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:44:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-360389
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 01:04:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 01:01:41 +0000   Mon, 22 Jul 2024 00:44:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 01:01:41 +0000   Mon, 22 Jul 2024 00:44:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 01:01:41 +0000   Mon, 22 Jul 2024 00:44:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 01:01:41 +0000   Mon, 22 Jul 2024 00:51:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.32
	  Hostname:    embed-certs-360389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec29f684ac484cc89954b52d4bb590db
	  System UUID:                ec29f684-ac48-4cc8-9954-b52d4bb590db
	  Boot ID:                    2fdd82bf-1aa7-46c3-ac7a-f2195fb3f2aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-7mzsv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-embed-certs-360389                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-embed-certs-360389             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-embed-certs-360389    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-8j7bx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-embed-certs-360389             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-569cc877fc-k68zp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m                kubelet          Node embed-certs-360389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node embed-certs-360389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node embed-certs-360389 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeReady                20m                kubelet          Node embed-certs-360389 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-360389 event: Registered Node embed-certs-360389 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-360389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-360389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-360389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-360389 event: Registered Node embed-certs-360389 in Controller
	
	
	==> dmesg <==
	[Jul22 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060293] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038357] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858457] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.796834] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.519045] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.288135] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.063121] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066416] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.225545] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.129536] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.289587] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.388152] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.075987] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.928566] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +4.642618] kauditd_printk_skb: 97 callbacks suppressed
	[Jul22 00:51] systemd-fstab-generator[1531]: Ignoring "noauto" option for root device
	[  +3.208787] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.917732] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] <==
	{"level":"info","ts":"2024-07-22T00:50:56.314821Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"af722703d3b6d364","initial-advertise-peer-urls":["https://192.168.72.32:2380"],"listen-peer-urls":["https://192.168.72.32:2380"],"advertise-client-urls":["https://192.168.72.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T00:50:56.314869Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T00:50:56.315012Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.32:2380"}
	{"level":"info","ts":"2024-07-22T00:50:56.315037Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.32:2380"}
	{"level":"info","ts":"2024-07-22T00:50:57.367543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T00:50:57.367591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:50:57.367622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 received MsgPreVoteResp from af722703d3b6d364 at term 2"}
	{"level":"info","ts":"2024-07-22T00:50:57.367634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T00:50:57.367639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 received MsgVoteResp from af722703d3b6d364 at term 3"}
	{"level":"info","ts":"2024-07-22T00:50:57.367647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T00:50:57.367659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: af722703d3b6d364 elected leader af722703d3b6d364 at term 3"}
	{"level":"info","ts":"2024-07-22T00:50:57.369939Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"af722703d3b6d364","local-member-attributes":"{Name:embed-certs-360389 ClientURLs:[https://192.168.72.32:2379]}","request-path":"/0/members/af722703d3b6d364/attributes","cluster-id":"69693fe7a610a475","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:50:57.370071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:50:57.370123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:50:57.371451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:50:57.371476Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:50:57.371919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.32:2379"}
	{"level":"info","ts":"2024-07-22T00:50:57.374326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-22T00:51:15.636064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.165041ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15232459096717954203 > lease_revoke:<id:536490d7eb8a8b71>","response":"size:28"}
	{"level":"info","ts":"2024-07-22T00:51:15.636336Z","caller":"traceutil/trace.go:171","msg":"trace[551364333] linearizableReadLoop","detail":"{readStateIndex:595; appliedIndex:594; }","duration":"265.17263ms","start":"2024-07-22T00:51:15.371135Z","end":"2024-07-22T00:51:15.636308Z","steps":["trace[551364333] 'read index received'  (duration: 9.328515ms)","trace[551364333] 'applied index is now lower than readState.Index'  (duration: 255.842329ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:51:15.636653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.480227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-k68zp\" ","response":"range_response_count:1 size:4281"}
	{"level":"info","ts":"2024-07-22T00:51:15.63672Z","caller":"traceutil/trace.go:171","msg":"trace[2032112048] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-k68zp; range_end:; response_count:1; response_revision:562; }","duration":"265.604616ms","start":"2024-07-22T00:51:15.371104Z","end":"2024-07-22T00:51:15.636709Z","steps":["trace[2032112048] 'agreement among raft nodes before linearized reading'  (duration: 265.37298ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T01:00:57.400834Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":806}
	{"level":"info","ts":"2024-07-22T01:00:57.411776Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":806,"took":"10.094727ms","hash":2900274465,"current-db-size-bytes":2756608,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2756608,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-07-22T01:00:57.411929Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2900274465,"revision":806,"compact-revision":-1}
	
	
	==> kernel <==
	 01:04:27 up 13 min,  0 users,  load average: 1.83, 1.80, 1.21
	Linux embed-certs-360389 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] <==
	I0722 00:58:59.704053       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:00:58.705594       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:00:58.705727       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0722 01:00:59.706254       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:00:59.706535       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:00:59.706624       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:00:59.706469       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:00:59.706755       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:00:59.708714       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:01:59.707696       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:01:59.708004       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:01:59.708047       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:01:59.709282       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:01:59.709391       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:01:59.709411       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:03:59.709240       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:03:59.709324       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:03:59.709333       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:03:59.710483       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:03:59.710629       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:03:59.710679       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] <==
	I0722 00:58:42.152314       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 00:59:11.705563       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 00:59:12.161771       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 00:59:41.710970       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 00:59:42.169776       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:00:11.716540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:00:12.178658       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:00:41.721003       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:00:42.187979       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:01:11.726228       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:01:12.196060       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:01:41.731661       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:01:42.202838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 01:02:08.902584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="294.629µs"
	E0722 01:02:11.738174       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:02:12.210726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 01:02:21.894389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="119.212µs"
	E0722 01:02:41.743603       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:02:42.218431       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:03:11.748737       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:03:12.227494       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:03:41.754385       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:03:42.237094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:04:11.760136       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:04:12.245216       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] <==
	I0722 00:50:59.546436       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:50:59.559114       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.32"]
	I0722 00:50:59.624579       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:50:59.624701       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:50:59.624741       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:50:59.631681       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:50:59.632069       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:50:59.632149       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:50:59.634021       1 config.go:192] "Starting service config controller"
	I0722 00:50:59.634111       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:50:59.634210       1 config.go:319] "Starting node config controller"
	I0722 00:50:59.634263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:50:59.634499       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:50:59.634528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:50:59.735692       1 shared_informer.go:320] Caches are synced for node config
	I0722 00:50:59.735818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:50:59.735846       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] <==
	I0722 00:50:56.376077       1 serving.go:380] Generated self-signed cert in-memory
	W0722 00:50:58.674942       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 00:50:58.675099       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:50:58.675184       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 00:50:58.675209       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 00:50:58.710051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 00:50:58.710098       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:50:58.714206       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 00:50:58.715585       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 00:50:58.715623       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:50:58.715644       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 00:50:58.816499       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 01:01:57 embed-certs-360389 kubelet[930]: E0722 01:01:57.897512     930 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 22 01:01:57 embed-certs-360389 kubelet[930]: E0722 01:01:57.897629     930 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 22 01:01:57 embed-certs-360389 kubelet[930]: E0722 01:01:57.897990     930 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz4cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-k68zp_kube-system(9d851e83-b647-4e9e-a098-45c8b9d10323): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 22 01:01:57 embed-certs-360389 kubelet[930]: E0722 01:01:57.898081     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:02:08 embed-certs-360389 kubelet[930]: E0722 01:02:08.884379     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:02:21 embed-certs-360389 kubelet[930]: E0722 01:02:21.882078     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:02:36 embed-certs-360389 kubelet[930]: E0722 01:02:36.881417     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:02:47 embed-certs-360389 kubelet[930]: E0722 01:02:47.881416     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:02:54 embed-certs-360389 kubelet[930]: E0722 01:02:54.898655     930 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:02:54 embed-certs-360389 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:02:54 embed-certs-360389 kubelet[930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:02:54 embed-certs-360389 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:02:54 embed-certs-360389 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:03:02 embed-certs-360389 kubelet[930]: E0722 01:03:02.881664     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:03:16 embed-certs-360389 kubelet[930]: E0722 01:03:16.882303     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:03:31 embed-certs-360389 kubelet[930]: E0722 01:03:31.882123     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:03:45 embed-certs-360389 kubelet[930]: E0722 01:03:45.881729     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:03:54 embed-certs-360389 kubelet[930]: E0722 01:03:54.897224     930 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:03:54 embed-certs-360389 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:03:54 embed-certs-360389 kubelet[930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:03:54 embed-certs-360389 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:03:54 embed-certs-360389 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:04:00 embed-certs-360389 kubelet[930]: E0722 01:04:00.881923     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:04:13 embed-certs-360389 kubelet[930]: E0722 01:04:13.881637     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:04:24 embed-certs-360389 kubelet[930]: E0722 01:04:24.883333     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	
	
	==> storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] <==
	I0722 00:50:59.438639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0722 00:51:29.443418       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
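	The fatal exit above is a startup race rather than a persistent failure: the first storage-provisioner container gave up after its ~30s API probe (00:50:59 to 00:51:29) while the apiserver was still coming up, and the replacement container below initialized and took the lease at 00:51:47. A minimal sketch for pulling the earlier container's output by hand, assuming minikube's default storage-provisioner pod name in kube-system and that the profile is still running:
	
	kubectl --context embed-certs-360389 -n kube-system logs storage-provisioner --previous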
	
	
	==> storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] <==
	I0722 00:51:30.210014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:51:30.225628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:51:30.225806       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:51:47.626322       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:51:47.627566       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-360389_47516c09-f34b-4973-966b-b31bc0bbc2c4!
	I0722 00:51:47.630660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aad4fa1f-009e-4076-a42a-18ba9d82c0b7", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-360389_47516c09-f34b-4973-966b-b31bc0bbc2c4 became leader
	I0722 00:51:47.728853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-360389_47516c09-f34b-4973-966b-b31bc0bbc2c4!
	

                                                
                                                
-- /stdout --
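Two patterns dominate the kubelet log above, and neither points at a new regression. The recurring ImagePullBackOff for metrics-server is expected in this suite: the Audit table later in this report shows the addon was enabled with --registries=MetricsServer=fake.domain, so the echoserver:1.4 pull can never succeed. The ip6tables canary errors are likewise benign noise from a guest kernel without the ip6table_nat module loaded. A minimal sketch for confirming the pull failure by hand, assuming the profile is still up:

	kubectl --context embed-certs-360389 -n kube-system describe pod metrics-server-569cc877fc-k68zp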
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-360389 -n embed-certs-360389
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-360389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-k68zp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-360389 describe pod metrics-server-569cc877fc-k68zp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-360389 describe pod metrics-server-569cc877fc-k68zp: exit status 1 (65.72404ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-k68zp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-360389 describe pod metrics-server-569cc877fc-k68zp: exit status 1
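The NotFound above is an artifact of the post-mortem helper rather than evidence that the pod was deleted: the describe is issued without a namespace, while the kubelet log places the pod in kube-system. With the namespace supplied, the same lookup would be expected to succeed (a sketch, assuming the profile is still running):

	kubectl --context embed-certs-360389 -n kube-system describe pod metrics-server-569cc877fc-k68zp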
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0722 00:55:51.033250   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:56:10.764830   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-945581 -n no-preload-945581
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-22 01:04:35.30835491 +0000 UTC m=+5994.016953622
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
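To reproduce the condition the test polls, the selector and namespace from the wait above can be queried directly; a minimal sketch, assuming the no-preload profile is still running:

	kubectl --context no-preload-945581 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

An empty result here matches the 9m0s timeout recorded above.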
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-945581 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-945581 logs -n 25: (2.121725083s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-590595             | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-590595                  | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
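	The Audit table is the quickest diagnostic in this dump: the stop rows for no-preload-945581 (00:42) and embed-certs-360389 (00:44) have no End Time, and neither do the subsequent enable dashboard rows. That lines up with the Stop and UserAppExistsAfterStop failures in the summary: if enable dashboard never completed, no k8s-app=kubernetes-dashboard pods can appear after the restart. The hanging invocation, copied from the table, can be retried by hand:
	
	out/minikube-linux-amd64 stop -p no-preload-945581 --alsologtostderr -v=3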
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:19.710843   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:25.790913   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:28.862882   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:34.942917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:38.014863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:44.094898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:47.166853   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:53.246799   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:56.318939   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:02.398890   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:05.470909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:11.550863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:14.622851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:20.702859   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:23.774851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:29.854925   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:32.926912   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:39.006904   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:42.078947   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:48.158822   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:51.230942   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:57.310909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:00.382907   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:06.462849   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:09.534836   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:15.614953   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:18.686869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:24.766917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:27.838869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:33.918902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:36.990920   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:43.070898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
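The /etc/hosts edit just run over SSH is idempotent: it only touches the file when the hostname is missing, and it rewrites an existing 127.0.1.1 entry rather than appending a duplicate. A minimal standalone sketch of the same pattern (the function wrapper and parameterized name are illustrative, not part of minikube):

    #!/bin/sh
    # Pin a machine name to 127.0.1.1 in /etc/hosts without duplicating entries.
    pin_hostname() {
        name="$1"
        # Nothing to do if some entry already ends with the name.
        if ! grep -q "[[:space:]]${name}\$" /etc/hosts; then
            if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
                # Rewrite the existing 127.0.1.1 line in place.
                sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" /etc/hosts
            else
                # No 127.0.1.1 line yet; append one.
                echo "127.0.1.1 ${name}" | sudo tee -a /etc/hosts >/dev/null
            fi
        fi
    }
    pin_hostname no-preload-945581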
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
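configureAuth regenerates the machine's server certificate with the SAN list shown above (loopback, the DHCP-assigned IP, and the host/machine names) and ships it to /etc/docker on the guest. minikube does this in Go; an openssl flow with the same org and SANs, shown purely for illustration, would be:

    # Illustrative openssl equivalent of the server-cert generation (not
    # minikube's actual code path; org and SANs are taken from the log).
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.no-preload-945581"
    printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.251,DNS:localhost,DNS:minikube,DNS:no-preload-945581\n' > san.ext
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
        -CAcreateserial -out server.pem -days 365 -extfile san.ext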
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
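The /etc/sysconfig/crio.minikube drop-in written a few lines up is how the --insecure-registry flag for the service CIDR reaches the CRI-O daemon; on minikube's Buildroot guest the crio unit is assumed to source that file as an environment file and expand $CRIO_MINIKUBE_OPTIONS on its command line. A quick sanity check on the guest:

    # Confirm the generated options file and that the running daemon uses it.
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -n MINIKUBE   # assumed: unit references CRIO_MINIKUBE_OPTIONS
    ps -o args= -C crio                     # flags should include --insecure-registry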
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
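The fix.go lines above compare the guest clock (read over SSH with date +%s.%N) against the host's wall clock and only resync when the drift exceeds a tolerance; here the 93ms delta was within bounds. The same measurement by hand (the comparison itself is a sketch; key path and address are from the log):

    # Measure guest/host clock drift the way the log does.
    guest=$(ssh -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa \
        docker@192.168.50.251 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "$host - $guest" | bc) seconds"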
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
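Renaming the stock bridge/podman CNI configs with a .mk_disabled suffix keeps CRI-O from wiring pods to a default network before minikube installs its own bridge config. To see what was parked out of the way on the guest:

    # List the CNI configs minikube disabled.
    ls -l /etc/cni/net.d/*.mk_disabled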
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
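Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl pinned. A way to check the end state (the expected values in the comment are reconstructed from the commands, not a capture of the actual file):

    # Expected after the edits:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
        /etc/crio/crio.conf.d/02-crio.conf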
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
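The sysctl failure above just means br_netfilter wasn't loaded yet: /proc/sys/net/bridge/ only exists once the module is in. The remediation the log performs, as a standalone snippet:

    # Load the bridge netfilter module, then re-check the sysctl it provides.
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables
    # Enable IPv4 forwarding for pod traffic.
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"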
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
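The two 60-second waits boil down to polling for the CRI socket and then asking crictl for a version over it. A minimal sketch of that readiness check (the loop itself is assumed; timeout and paths are from the log):

    # Wait up to 60s for CRI-O's socket, then confirm crictl can reach it.
    for _ in $(seq 1 60); do
        [ -S /var/run/crio/crio.sock ] && break
        sleep 1
    done
    sudo /usr/bin/crictl version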
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
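The one-liner above filters any stale host.minikube.internal entry out of /etc/hosts, appends the fresh mapping to the gateway address, and copies the temp file back with cp so the original file (and its inode) stays in place. Verifying the result on the guest:

    # host.minikube.internal should now resolve to the libvirt gateway.
    getent hosts host.minikube.internal   # expect: 192.168.50.1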
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
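The LoadCachedImages sequence above applies one pattern per image: inspect the runtime for the expected image ID, crictl rmi any mismatched copy, stat the cached tarball on the guest (skipping the transfer when it already exists), then podman load it into CRI-O's shared storage. Condensed into a by-hand loop (the loop is illustrative; paths are from the log):

    # Load cached image tarballs into the container runtime's storage.
    for tar in /var/lib/minikube/images/*_v1.31.0-beta.0 \
               /var/lib/minikube/images/etcd_3.5.14-0 \
               /var/lib/minikube/images/coredns_v1.11.1 \
               /var/lib/minikube/images/storage-provisioner_v5; do
        sudo podman load -i "$tar"
    done
    sudo crictl images   # all eight images should now be present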
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
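	Note the evictionHard thresholds of "0%" and imageGCHighThresholdPercent: 100 above: as the inline comment says, disk-pressure eviction is effectively disabled for the test VM. One way to confirm what the running kubelet actually picked up (a sketch, assuming kubectl already points at this cluster and python3 is available on the host):

	kubectl get --raw "/api/v1/nodes/no-preload-945581/proxy/configz" | python3 -m json.tool | grep -A4 evictionHard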
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
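	That one-liner is the usual safe-rewrite idiom for /etc/hosts: filter out any stale entry, append the current mapping, stage the result in a PID-suffixed temp file, then copy it back with sudo cp (rather than mv) so the file keeps its inode and permissions. The same command, unpacked with comments:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;  # drop any stale mapping
	  echo "192.168.50.251	control-plane.minikube.internal";    # append the fresh one
	} > /tmp/h.$$                                                # $$ = shell PID, a cheap unique name
	sudo cp /tmp/h.$$ /etc/hosts                                 # cp preserves inode and mode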
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
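	The ls/openssl/ln triplets above reimplement what c_rehash does: each trusted CA is symlinked as <subject-hash>.0 so OpenSSL can find it by hash at verification time (the log shows minikubeCA hashing to b5213941). The equivalent two commands:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"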
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
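	Each -checkend 86400 call asks OpenSSL whether the certificate will still be valid 24 hours from now; exit status 0 means yes, non-zero means renewal is due, and that answer gates the restart path that follows. The same checks as a loop (cert names from the log):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    && echo "${c}: valid for 24h+" || echo "${c}: expiring soon"
	done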
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
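	libmachine's .Start here is a thin wrapper over libvirt: make sure the default and cluster networks are active, then boot the stopped domain. A rough virsh equivalent, assuming the qemu:///system URI from the cluster config:

	virsh -c qemu:///system net-start mk-embed-certs-360389 2>/dev/null || true  # no-op if already active
	virsh -c qemu:///system start embed-certs-360389
	virsh -c qemu:///system domstate embed-certs-360389                          # expect: running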
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
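	provision.go issues a server certificate whose SAN list covers every name or address the machine might be reached by (loopback, the DHCP-leased IP, and the hostnames). A self-signed openssl stand-in that reproduces the same SAN set; note the real code signs with the minikube CA instead of self-signing, and -addext needs OpenSSL 1.1.1+:

	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.old-k8s-version-366657" \
	  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.39.174,DNS:localhost,DNS:minikube,DNS:old-k8s-version-366657"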
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
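	The clock check compares the guest's date +%s.%N against the host time captured just before the SSH round-trip; the ~96ms delta here is well inside tolerance, so no resync is triggered. The same comparison by hand (a sketch; the profile flag is an assumption):

	host=$(date +%s.%N)
	guest=$(minikube ssh -p old-k8s-version-366657 -- date +%s.%N)
	echo "delta: $(echo "$guest - $host" | bc) s"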
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
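	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image pinned to registry.k8s.io/pause:3.2, cgroupfs as the cgroup manager, conmon moved into the pod cgroup) before this restart. To double-check what CRI-O actually ended up with:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	sudo crio config | grep -E 'pause_image|cgroup_manager'   # effective config as CRI-O parses it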
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
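
The scp above ships a ~451 MiB preload tarball of container images to the node. A sketch of sanity-checking the transfer, assuming lz4 is on the PATH (the extract step later in this log uses it):

    # Size should match the 473237281 bytes logged; lz4 -t verifies the
    # compressed frame without extracting anything.
    stat -c '%s' /preloaded.tar.lz4
    lz4 -t /preloaded.tar.lz4
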
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500 (body identical to the 500 response above)
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500 (body identical to the 500 response above)
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500 (body identical to the 500 response above)
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
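
The 403 -> 500 -> 200 progression above is the normal apiserver bootstrap sequence: anonymous probes are forbidden until the rbac/bootstrap-roles poststarthook lands, and /healthz keeps returning 500 until the remaining hooks finish. A sketch of reproducing the probe by hand (admin.conf is kubeadm's default admin kubeconfig path on the node, an assumption here):

    # Anonymous request: 403 until the RBAC bootstrap roles exist.
    curl -sk 'https://192.168.50.251:8443/healthz?verbose'
    # Authenticated request shows the same per-check detail as the log.
    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'
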
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
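
The 496-byte conflist written above is what the "Configuring bridge CNI" step amounts to. It can be inspected on the node (path from the log); a conflist of this kind typically names the bridge plugin with a host-local IPAM range matching the 10.244.0.0/16 pod CIDR used elsewhere in this run, though the exact file content is not shown in the log:

    # Inspect the bridge conflist the scp above installed.
    cat /etc/cni/net.d/1-k8s.conflist
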
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
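
Each pod_ready wait above is the programmatic equivalent of a kubectl wait per pod. A sketch, with namespace and pod name taken from the log and the kubectl context assumed to match the profile name:

    kubectl --context no-preload-945581 -n kube-system \
      wait --for=condition=Ready pod/kube-proxy-f5ttf --timeout=4m
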
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
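
The failed load above traces back to the image cache layout: each tag maps to a file under the cache directory with ':' replaced by '_' (pause:3.2 becomes pause_3.2), and that file is missing here. Listing the directory makes the gap visible (paths straight from the log):

    ls -l /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/
    # entries the loader looked for: pause_3.2, kube-apiserver_v1.20.0, etcd_3.4.13-0, coredns_1.7.0
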
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
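
The [Unit]/[Service] fragment above becomes a systemd drop-in (the 10-kubeadm.conf scp a few lines below). Once the daemon-reload runs, the merged unit can be inspected with:

    systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet  # should report active after the start below
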
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
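
The generated config above targets kubeadm's legacy v1beta2 API, which matches the v1.20.0 binaries the check above just found staged under /var/lib/minikube/binaries. As a sanity check, that same kubeadm can re-render the file (a sketch; the .yaml.new path comes from the scp step below, and /dev/stdout is used as the output file):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml.new --new-config /dev/stdout
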
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
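
The test -L / ln -fs pairs above implement OpenSSL's hashed-certificate lookup: the link name is the subject hash printed by the preceding `openssl x509 -hash` runs, suffixed with .0, so verification can find the CA by hash in /etc/ssl/certs. Checking one of them by hand (hash and paths from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # link made by the ln -fs above
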
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
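
Each -checkend 86400 run above asks whether the certificate stays valid for at least one more day; openssl exits non-zero if it would expire within that window. Adding -enddate shows the expiry actually being tested:

    sudo openssl x509 -noout -enddate -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo still-valid
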
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
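[Editor's note] The grep/rm sequence above is a stale-config sweep: each kubeconfig-style file under /etc/kubernetes must reference the expected control-plane endpoint, and any file that is missing it (or missing entirely) is removed so the subsequent "kubeadm init phase kubeconfig" regenerates it. A compact Go sketch of the same loop, assuming direct file access instead of minikube's ssh_runner:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // cleanupStaleConfigs removes kubeconfig files that do not reference the
    // expected control-plane endpoint, mirroring the grep/rm sequence above.
    func cleanupStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			// Missing file or stale endpoint: drop it so kubeadm regenerates it.
    			os.Remove(p)
    			fmt.Printf("removed stale config: %s\n", p)
    		}
    	}
    }

    func main() {
    	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }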
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
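[Editor's note] The repeated pgrep runs above poll for the kube-apiserver process at roughly 500ms intervals (visible in the .460/.960 timestamps) until it appears or the apiserver wait times out. A minimal sketch of that polling loop, shelling out to pgrep the same way; the 2-minute timeout is an assumption, not minikube's actual value:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process appears or
    // the deadline passes, mirroring the ~500ms cadence in the log.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the pattern.
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }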
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
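[Editor's note] The WaitForSSH exchange above probes the guest by running "exit 0" over an external ssh client with host-key checking disabled; a nil error means sshd is up and accepting the key. A sketch of that probe under the same assumptions (external ssh binary on PATH; address, key path, and retry interval are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH retries a no-op remote command until the SSH daemon answers.
    func waitForSSH(addr, keyPath string, attempts int) error {
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath,
    			"docker@"+addr, "exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // sshd accepted the connection and ran the no-op
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s never became available", addr)
    }

    func main() {
    	// Hypothetical key path; the log uses the per-machine id_rsa.
    	fmt.Println(waitForSSH("192.168.72.32", "/path/to/id_rsa", 30))
    }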
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
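[Editor's note] The server cert generated above carries the SAN list from the log: two IPs (127.0.0.1, 192.168.72.32) and three DNS names (embed-certs-360389, localhost, minikube). A compact Go sketch of issuing such a certificate with crypto/x509; minikube signs with its CA key, while this sketch self-signs for brevity, so it is an illustration of the SAN layout rather than the real flow (error handling elided):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-360389"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as listed in the provision.go line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.32")},
    		DNSNames:    []string{"embed-certs-360389", "localhost", "minikube"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }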
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
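[Editor's note] The fix.go lines above read the guest clock (date +%s.%N), compare it against the host's reference time, and proceed only if the absolute delta stays within tolerance; here it is 90.477635ms. A small sketch of that comparison; the one-second tolerance is an assumption for illustration:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance mirrors the guest-clock check: the delta between guest
    // and host timestamps must stay under a bound before provisioning continues.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Timestamps taken from the log lines above.
    	guest := time.Unix(1721609447, 6036489)
    	host := guest.Add(-90477635 * time.Nanosecond)
    	d, ok := withinTolerance(guest, host, time.Second)
    	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
    }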
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
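[Editor's note] The status-255 sysctl above is the expected probe-then-fallback: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so on failure minikube runs modprobe and then enables IPv4 forwarding. A sketch of that sequence (requires root; not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback above: if the sysctl's proc
    // file is missing, load br_netfilter, then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(sysctlPath); err != nil {
    		// Module not loaded; modprobe makes the sysctl appear.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }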
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
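[Editor's note] The preload flow above is: stat /preloaded.tar.lz4 (status 1, so the tarball is absent), scp the ~406MB image preload over, then unpack it into /var with lz4 decompression while preserving extended attributes. A sketch of the extract step, shelling out to tar under the same flags; only the "would need to copy" branch is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload mirrors the log: if the preload tarball is present,
    // unpack the container images and metadata into /var via lz4.
    func extractPreload(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload missing, would need to copy it first: %w", err)
    	}
    	cmd := exec.Command("tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }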
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
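
	The retries above come from minikube's retry helper: each failed DHCP-lease lookup schedules another attempt after a delay that grows and is jittered, which is why the intervals climb from roughly 230ms toward seconds. A minimal sketch of that pattern, assuming a hypothetical lookupIP helper in place of the real libvirt lease query:

	    // Sketch of a grow-and-jitter retry loop for waiting on a machine IP.
	    // lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query;
	    // the backoff constants are illustrative, not minikube's exact values.
	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

	    func waitForIP(timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        backoff := 200 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := lookupIP(); err == nil {
	                return ip, nil
	            }
	            // Grow the delay and add jitter so parallel waiters don't sync up.
	            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
	            time.Sleep(sleep)
	            backoff = backoff * 3 / 2
	        }
	        return "", errors.New("timed out waiting for machine IP")
	    }

	    func main() {
	        if _, err := waitForIP(2 * time.Second); err != nil {
	            fmt.Println(err)
	        }
	    }
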
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
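
	One detail of the generated kubelet unit worth noting: the bare ExecStart= line is intentional. In a systemd drop-in, an empty assignment clears any ExecStart inherited from the base kubelet.service, and the following line installs the replacement command; without the reset, systemd would reject a second ExecStart on a non-oneshot service. Schematically (paths illustrative):

	    # kubelet.service.d drop-in (illustrative)
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --config=/var/lib/kubelet/config.yaml
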
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
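
	One quirk of how this template shows up in logs: minikube pushes it through Go's fmt, so a literal % followed by a quote character is parsed as an unknown verb with no operand, and percent-valued fields such as the "0%" eviction thresholds come out as "0%!"(MISSING) in raw output. A two-line demonstration, runnable as-is:

	    package main

	    import "fmt"

	    func main() {
	        // No arguments are supplied, so fmt treats `%"` as a verb with a
	        // missing operand and prints: nodefs.available: "0%!"(MISSING)
	        fmt.Printf("nodefs.available: \"0%\"\n")
	    }
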
	
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
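
	The bash one-liner above is an idempotent hosts-file edit: filter out any stale line ending in the control-plane hostname, append the current IP mapping, and copy the rebuilt file over /etc/hosts in one step. A rough Go equivalent of the same idea (a sketch only, using the IP and hostname from this run):

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // ensureHostsEntry rewrites path so that exactly one line maps host to ip.
	    func ensureHostsEntry(path, ip, host string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        var keep []string
	        for _, line := range strings.Split(string(data), "\n") {
	            // Drop any previous "<ip>\t<host>" mapping for this hostname.
	            if !strings.HasSuffix(line, "\t"+host) {
	                keep = append(keep, line)
	            }
	        }
	        keep = append(keep, fmt.Sprintf("%s\t%s", ip, host))
	        return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0644)
	    }

	    func main() {
	        if err := ensureHostsEntry("/etc/hosts", "192.168.72.32", "control-plane.minikube.internal"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }
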
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
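
	Each of these openssl x509 -checkend 86400 calls exits zero only if the certificate is still valid 24 hours from now, which is how minikube decides whether a control-plane cert needs regenerating before reuse. The same check via Go's standard library (a sketch; the path is one of the certs probed above):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the PEM certificate at path expires
	    // inside the given window (the -checkend semantics).
	    func expiresWithin(path string, window time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("%s: no PEM block found", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(window).After(cert.NotAfter), nil
	    }

	    func main() {
	        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            return
	        }
	        fmt.Println("expires within 24h:", expiring)
	    }
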
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
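
	The healthz sequence above is the expected restart progression: connection refused while the apiserver binds, 403 for the anonymous probe while RBAC bootstraps, 500 until the rbac/bootstrap-roles and priority-class post-start hooks finish, then 200 ok. A stripped-down version of the polling loop (a sketch, not minikube's code; a real client would trust the cluster CA rather than skip verification):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func waitForHealthz(url string, timeout time.Duration) error {
	        // Sketch only: skips certificate verification instead of loading
	        // the cluster CA, which is what a production probe should do.
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz returned 200 "ok"
	                }
	                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %s", url)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.72.32:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
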
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
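
	Each wait in this block boils down to polling the pod's Ready condition, with the short-circuit seen above when the hosting node itself is not Ready yet. A compact client-go rendition (a sketch, assuming a kubeconfig at the default location; the pod name is taken from this run):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podReady reports whether the pod's Ready condition is True.
	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-7mzsv", metav1.GetOptions{})
	            if err == nil && podReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for pod to be Ready")
	    }
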
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
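The apply step above is how minikube enables the metrics-server addon: the manifests are scp'd into /etc/kubernetes/addons on the guest and then applied with the pinned kubectl binary against the in-guest kubeconfig. A minimal Go sketch of the same invocation, run locally with os/exec for illustration (in the real flow ssh_runner delivers it over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log line above; sudo accepts the leading
	// KUBECONFIG=... environment assignment before the command.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}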
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
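The repeating pgrep lines from process 71766 are a restart health poll: roughly every 500ms minikube checks whether a kube-apiserver process matching the minikube command line is up yet. A sketch of that loop, assuming a 2-minute deadline (the actual timeout is not visible in this excerpt):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
	for time.Now().Before(deadline) {
		// pgrep exits non-zero while no matching process exists.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}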
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
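provision.go:117 above generates a TLS server certificate signed by the local CA, with SANs covering the loopback address, the machine IP, and the hostnames shown. A rough sketch of that step using crypto/x509; the file paths, RSA/PKCS#1 key type, and key-usage flags here are assumptions, while the SAN list, org name, and 26280h expiry are taken from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair kept under .minikube/certs (paths assumed; error
	// handling elided for brevity).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// SANs mirror the log: san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-214905"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.97")},
		DNSNames:     []string{"default-k8s-diff-port-214905", "localhost", "minikube"},
	}
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}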
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
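fix.go compares the guest's `date +%s.%N` output against the host wall clock and accepts the skew when it falls inside a tolerance window, as seen at fix.go:200/229 above. A small sketch of that comparison; the one-second tolerance is an assumption, the log only records an 81.995205ms delta passing:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" as emitted by `date +%s.%N` and
// returns host minus guest (sign depends on which clock is ahead).
func clockDelta(guestOut string, host time.Time) time.Duration {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	return host.Sub(time.Unix(sec, nsec))
}

func main() {
	// Values copied from the log lines above.
	delta := clockDelta("1721609467.506036600", time.Unix(1721609467, 424041395))
	const tolerance = time.Second // assumed; not shown in this excerpt
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}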
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
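The sed invocations above converge on a handful of settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager becomes "cgroupfs", conmon_cgroup becomes "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A sketch of the first three rewrites in Go (the input snippet is a stand-in, not the real file contents):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	// Same whole-line substitutions as the sed commands in the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Println(conf)
}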
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
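The node_ready/pod_ready lines above poll the API until the node and each system-critical pod report Ready. An approximate local equivalent using kubectl's jsonpath output in a loop; the 2-second interval is an assumption, while the 6-minute budget, node name, and context name match the log:

package main

import (
	"os/exec"
	"strings"
	"time"
)

// nodeReady probes the node's Ready condition through kubectl.
func nodeReady(node string) bool {
	out, err := exec.Command("kubectl", "--context", "embed-certs-360389",
		"get", "node", node, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) && !nodeReady("embed-certs-360389") {
		time.Sleep(2 * time.Second)
	}
}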
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
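	The lines above show the preload path: crictl finds no preloaded kube-apiserver image, so the preloaded-images tarball is copied to /preloaded.tar.lz4 and unpacked into /var with security xattrs preserved. A local, hypothetical approximation of the check-then-extract step (the real code drives these commands over SSH via ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload verifies the tarball is present, then extracts it
	// with the same flags as the logged command: keep security xattrs,
	// decompress with lz4, unpack under /var.
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload missing, would need transfer: %w", err)
		}
		cmd := exec.Command("sudo", "tar", "--xattrs",
			"--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}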
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
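	The pod_ready lines poll each control-plane pod until its PodReady condition turns True, logging Ready:"False" on every miss. A minimal client-go sketch of that condition check (the helper name isPodReady is hypothetical; the real polling loop and timeouts live in pod_ready.go):

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the named pod's PodReady condition is True.
	// A caller would invoke this in a loop with a deadline, as the log shows.
	func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}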
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
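	The preceding scp-memory writes install the kubelet drop-in (10-kubeadm.conf), the kubelet unit file, and kubeadm.yaml.new, after which systemd is reloaded and kubelet started. A hypothetical sketch of that install-and-restart step, assuming the paths shown in the log:

	package main

	import (
		"os"
		"os/exec"
	)

	// installKubeletUnit writes the kubeadm drop-in, then reloads systemd
	// and starts kubelet, mirroring the daemon-reload/start pair above.
	func installKubeletUnit(dropIn []byte) error {
		dir := "/etc/systemd/system/kubelet.service.d"
		if err := os.MkdirAll(dir, 0755); err != nil {
			return err
		}
		if err := os.WriteFile(dir+"/10-kubeadm.conf", dropIn, 0644); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}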
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
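	Each CA above is staged under /usr/share/ca-certificates and then linked from /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 in the log). A sketch that derives the hash the same way the log does, by shelling out to openssl; linkBySubjectHash is a hypothetical name:

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash links a CA cert into /etc/ssl/certs as <hash>.0,
	// mirroring the `openssl x509 -hash` plus `ln -fs` pair in the log.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // ln -fs semantics: replace if present
		return os.Symlink(certPath, link)
	}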
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
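	The six -checkend 86400 runs above verify that every control-plane certificate remains valid for at least another 24 hours. A pure-Go sketch of the equivalent check (expiresWithin is a hypothetical name):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, matching what `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}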
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
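	Rather than a full kubeadm init, the restart path re-runs the individual init phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A hypothetical sketch of that phase loop, using the PATH-prefixed invocation seen in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runInitPhases executes each kubeadm init phase in order through bash,
	// with the minikube binaries dir prepended to PATH as the log shows.
	func runInitPhases(binDir, cfg string) error {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, p, cfg)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				return fmt.Errorf("phase %q: %v: %s", p, err, out)
			}
		}
		return nil
	}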
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
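	The healthz wait above tolerates 403 (the anonymous probe is rejected until the RBAC bootstrap roles exist) and 500 (post-start hooks still failing), and succeeds once /healthz returns 200 "ok". A sketch of such a poll, assuming InsecureSkipVerify in place of loading the cluster CA as the real client does:

	package main

	import (
		"crypto/tls"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 or the timeout elapses; 403 and 500 count as "not ready".
	func waitHealthz(url string, timeout time.Duration) bool {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return true
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}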
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
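	Process 71766 belongs to the old-k8s-version cluster (note the v1.20.0 kubectl path): pgrep finds no kube-apiserver process, crictl finds no control-plane containers of any kind, so the tooling falls back to collecting the kubelet and CRI-O journals, dmesg, and container status, then retries the whole cycle. The same collection can be reproduced by hand; a sketch, assuming a shell inside the node (e.g. via minikube ssh with the matching -p profile), using the exact commands from the log:

	  sudo crictl ps -a --quiet --name=kube-apiserver   # empty output: no apiserver container exists
	  sudo journalctl -u kubelet -n 400                 # kubelet log tail
	  sudo journalctl -u crio -n 400                    # CRI-O log tail
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400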
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
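	Every describe-nodes attempt in this loop fails the same way: with no kube-apiserver container running, nothing is listening on localhost:8443, so kubectl's connection is refused. A quick check that the two symptoms agree (a sketch; the curl probe of the apiserver's secure port is an assumption, with the port taken from the error above):

	  sudo crictl ps --name=kube-apiserver    # running containers only; no output means nothing serves 8443
	  curl -k https://localhost:8443/healthz  # a connection-refused error here matches the kubectl failure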
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
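
The pod_ready.go lines interleaved here come from three other minikube processes (PIDs 71396, 72069 and 71227) driving parallel test clusters, each polling its own metrics-server pod every few seconds; the interleaving is also why the timestamps briefly run backwards. Each check reduces to reading the pod's Ready condition. A minimal client-go sketch, assuming a reachable cluster and a kubeconfig in the default location (the pod and namespace names are copied from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors what the pod_ready.go lines report: a pod counts as
// ready only when its PodReady condition is ConditionTrue.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-78fcd8795b-k5q49", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q ready=%v\n", pod.Name, isPodReady(pod))
}
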
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
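
By this point PID 71766 has repeated the same cycle roughly every three seconds since 00:52:03: pgrep for a kube-apiserver process, probe each component with crictl, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status output. The shape is a standard poll-until-ready loop that keeps retrying until the component appears or an overall deadline expires; a minimal sketch using apimachinery's wait helper (the 3s interval and 5m timeout are illustrative assumptions, not minikube's actual settings):

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Retry every 3s for up to 5m. pgrep exits non-zero when nothing
	// matches, mirroring the `pgrep -xnf kube-apiserver.*minikube.*`
	// probe that opens each cycle in the log.
	err := wait.PollImmediate(3*time.Second, 5*time.Minute, func() (bool, error) {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() != nil {
			return false, nil // not up yet; keep polling
		}
		return true, nil
	})
	fmt.Println("wait for kube-apiserver:", err)
}
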
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
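[Editor's note] The pod_ready lines interleaved here come from three concurrent test processes (PIDs 71396, 72069, 71227), each polling a metrics-server pod's Ready condition. A hedged one-off equivalent of that probe, with <context> as a placeholder for the test cluster's kubectl context (not given in this excerpt):

    # Prints "True" once the pod's Ready condition flips; "False" while it matches the log above:
    kubectl --context <context> -n kube-system get pod metrics-server-569cc877fc-dm7k7 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'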
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
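[Editor's note] The container-status command above packs a double fallback into one line. Expanded for readability (same semantics, illustrative only):

    # `which crictl || echo crictl` yields the binary's full path if installed, else the bare name;
    # if that invocation fails entirely, fall back to docker:
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a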
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
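[Editor's note] Note the cadence: a full probe cycle (one pgrep, eight crictl listings, four log gathers) repeats roughly every three seconds (00:52:34, :37, :40, :43, ...). A minimal bash sketch of such a poll-until-found loop, assuming the same ~3s interval; minikube's actual loop lives in the Go code behind the logs.go/cri.go lines above, so this is a reconstruction, not its source:

    # Poll until an apiserver container exists, mirroring the cycle in this log:
    while true; do
      id="$(sudo crictl ps -a --quiet --name=kube-apiserver)"
      [ -n "$id" ] && break   # found one: stop polling
      sleep 3                 # otherwise retry on the ~3s cadence seen above
    done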
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
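[Editor's note] For reference, the four "Gathering logs" probes that recur in every cycle, annotated (commands copied from the log; flag meanings per util-linux dmesg and systemd journalctl):

    sudo journalctl -u kubelet -n 400   # last 400 lines of the kubelet unit
    sudo journalctl -u crio -n 400      # last 400 lines of the CRI-O unit
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
                                        # kernel ring buffer: -P no pager, -H human-readable,
                                        # -L=never no color, filtered to warning-and-worse
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # fails here while the apiserver is down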
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
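Each cycle above and below runs the same per-component container check over SSH. As a minimal sketch, assuming shell access to the node (for example via "minikube ssh"), the listing can be reproduced manually with the exact commands quoted in the log; empty output for every name matches the "found id: \"\"" lines:

    # Commands taken verbatim from the log above; run on the minikube node.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"   # empty output = no container
    done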
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
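Every "describe nodes" attempt in this section fails the same way: kubectl, pointed at the kubeconfig on the node, gets connection refused on localhost:8443, which is consistent with the empty crictl listings (no kube-apiserver container exists to serve that port). A minimal manual check, assuming the same shell access to the node; the kubectl path and kubeconfig path are copied from the log, while the use of ss is an added suggestion:

    # Nothing should be listening on 8443 while the apiserver container is missing.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # expected: connection refused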
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
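For context on the cycle above: each `cri.go` pass shells out to `sudo crictl ps -a --quiet --name=<component>` and treats an empty result as "No container was found matching". A minimal, self-contained sketch of that probe (not minikube's actual cri.go; `listCRIContainers` is a hypothetical stand-in) looks like this:

    // Sketch of the CRI container probe seen in the log: run
    // `sudo crictl ps -a --quiet --name=<name>` and collect the
    // container IDs, one per line. An empty result corresponds to
    // the "0 containers" / No-container-was-found log entries.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listCRIContainers is a hypothetical stand-in for minikube's helper.
    func listCRIContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listCRIContainers(name)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }

In the failing runs above, every such probe returns an empty ID list, which is why the runner falls back to gathering kubelet, dmesg, and CRI-O logs each cycle.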
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
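The timestamps show the runner re-checking for an apiserver process roughly every three seconds with `sudo pgrep -xnf kube-apiserver.*minikube.*`, then repeating the log-gathering pass when nothing is found. A minimal sketch of that poll loop, assuming plain local shell access rather than minikube's ssh_runner (`pollAPIServer` is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollAPIServer retries the same pgrep the log shows until a
    // kube-apiserver process appears or the timeout elapses.
    func pollAPIServer(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := pollAPIServer(3*time.Second, time.Minute); err != nil {
    		fmt.Println(err) // the failing runs in this report stay on this path
    	}
    }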
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
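The interleaved `pod_ready.go` lines come from three other test processes (PIDs 71396, 72069, 71227), each polling whether its cluster's metrics-server pod has reached the Ready condition. A hedged standalone equivalent of that check using client-go (not minikube's pod_ready.go; the kubeconfig path is a placeholder, and the pod name is copied from the log):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True --
    // the check that keeps logging "Ready":"False" above.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
    		"metrics-server-569cc877fc-k68zp", metav1.GetOptions{}) // name taken from the log
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %q Ready=%v\n", pod.Name, podReady(pod))
    }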
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
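Each "Gathering logs for ..." pass runs a fixed set of shell commands, all visible verbatim in the lines above. A minimal local sketch of that pass, assuming direct shell access instead of minikube's ssh_runner (the step list simply mirrors the logged commands):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same commands, same order, as the log's gathering pass.
    	steps := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range steps {
    		fmt.Printf("=== %s ===\n", s.name)
    		out, _ := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Println(string(out))
    	}
    }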
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
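The repeated `connection refused` on localhost:8443 means nothing is listening on the apiserver's secure port, which is consistent with every `crictl` probe finding zero kube-apiserver containers. A minimal reachability probe for that endpoint (a sketch, not part of the test suite; TLS verification is skipped because this only exercises the TCP/HTTP layer):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://localhost:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // the state this report shows
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }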
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
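
Every "describe nodes" attempt in this stretch fails identically: localhost:8443 is the in-node apiserver endpoint, so "connection refused" means nothing is listening there yet, consistent with the empty crictl listings above. A quick manual check on the node is sketched below; the ss invocation is an assumed diagnostic, not a command taken from this log:

    # assumption: ss (iproute2) is available on the minikube node
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
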
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
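
This gather cycle (list each expected control-plane container, then collect dmesg, describe-nodes, CRI-O, container-status and kubelet logs) repeats every few seconds for the rest of this section; each pass finds zero containers because the kube-apiserver never comes up. The per-container check uses exactly the crictl command shown in the log; the loop wrapper below is illustrative:

    # sketch, assuming SSH access to the minikube node; the names are the ones queried in the log
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
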
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
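
The interleaved pod_ready lines come from three other test processes (71396, 72069, 71227) that are concurrently polling their metrics-server pods; the Ready condition never flips to True within the test timeout. A hedged equivalent of that poll is sketched below; the k8s-app=metrics-server label selector is an assumption, not taken from this log:

    # sketch: read the Ready condition the way the wait loop does
    kubectl --context <profile> -n kube-system get pod \
      -l k8s-app=metrics-server \
      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
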
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
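The cleanup just recorded follows one pattern per kubeconfig: if the expected control-plane endpoint is absent from the file, the file is treated as stale and removed. A hedged shell sketch of that loop (file list and endpoint taken from the log; the loop itself is an illustration, not minikube's actual code):

    # Remove any kubeconfig that no longer points at the expected endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done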
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
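The control plane boots as static pods: the kubelet watches /etc/kubernetes/manifests and starts whatever pod manifests it finds there, with no API server involved yet. A quick way to confirm the manifests kubeadm just wrote (file names taken from the --ignore-preflight-errors list in the log):

    ls /etc/kubernetes/manifests
    # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml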
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
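The healthz probe above can be reproduced by hand (address taken from the log; on default kubeadm clusters /healthz is readable without credentials via the system:public-info-viewer RBAC binding):

    curl -k https://192.168.72.32:8443/healthz
    # ok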
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
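The 496-byte 1-k8s.conflist contents are not shown in the log; a representative bridge conflist of the kind written here (values assumed for illustration, not necessarily minikube's actual template) would look like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF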
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
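The oom_adj check can likewise be run manually (command taken from the log line above); a value of -16 tells the kernel's OOM killer to strongly prefer reclaiming other processes before the apiserver:

    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # -16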
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
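The log-gathering pass above maps onto a handful of commands inside the guest. A minimal sketch, assuming "minikube ssh -p embed-certs-360389" reaches this node and crictl is installed (both hold for these KVM/crio runs); the container ID is a placeholder for one returned by the first command:

    # Find container IDs by name, as cri.go does:
    minikube ssh -p embed-certs-360389 -- sudo crictl ps -a --quiet --name=kube-proxy
    # Tail the last 400 lines of a container found above:
    minikube ssh -p embed-certs-360389 -- sudo crictl logs --tail 400 CONTAINER_ID
    # Unit logs for kubelet and CRI-O, matching the journalctl calls in logs.go:
    minikube ssh -p embed-certs-360389 -- sudo journalctl -u kubelet -n 400
    minikube ssh -p embed-certs-360389 -- sudo journalctl -u crio -n 400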
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
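Everything the waiter verified from 00:55:25 onward can be re-checked against the finished cluster. A rough equivalent, assuming the kubeconfig context minikube wrote for this profile (contexts are named after the profile):

    kubectl --context embed-certs-360389 get pods -n kube-system   # the 8 pods listed above
    kubectl --context embed-certs-360389 get sa default            # the default_sa.go check
    minikube ssh -p embed-certs-360389 -- sudo systemctl is-active kubelet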
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
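The addon flow above stages each manifest under /etc/kubernetes/addons/ and applies it with the cluster's own kubectl binary; the metrics-server apply at 00:55:25.614686 is the one the tests later wait on. A sketch of reaching the same state, either from the host via the minikube CLI or inside the guest exactly as the log runs it:

    minikube -p no-preload-945581 addons enable metrics-server
    # or, inside the guest:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml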
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
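The healthz probe at api_server.go:253 is a plain HTTPS GET. A quick manual check, skipping certificate verification (-k) since the apiserver cert is signed by minikube's own CA; /healthz is readable anonymously under the default RBAC bindings:

    curl -k https://192.168.50.251:8443/healthz
    # a healthy apiserver answers HTTP 200 with body "ok", as logged above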
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
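The closing version check is informational here: a client/server minor skew of 1 (kubectl 1.30.3 against a 1.31.0-beta.0 apiserver) is inside kubectl's supported window of one minor version in either direction. To see the same numbers directly:

    kubectl --context no-preload-945581 version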
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
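The repeating [kubelet-check] failures above are kubeadm polling the kubelet's local healthz endpoint. When debugging a run like this, the same probe plus the unit state usually identifies the cause; a sketch to run inside the affected guest:

    curl -sSL http://localhost:10248/healthz       # the exact probe kubeadm retries above
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager   # recent kubelet errors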
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
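The four grep/rm pairs above implement a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444. Condensed into a loop (grep exits nonzero both when the string is absent and, as here, when the file is missing, so either case triggers the rm):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done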
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
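The [WARNING Service-Kubelet] line is kubeadm noticing the kubelet unit is started but not enabled; minikube manages the unit itself, so this is cosmetic for these tests, but it clears with the command the warning names (run inside the guest):

    sudo systemctl enable kubelet.service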
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
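The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a representative conflist for the standard bridge and portmap CNI plugins looks like the following; every field value here is illustrative, not minikube's actual file:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF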
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
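The oom_adj read at ops.go:34 confirms the apiserver is shielded from the kernel's OOM killer (-16 lowers its kill priority; the modern equivalent knob is /proc/PID/oom_score_adj). To repeat the check inside the guest:

    cat /proc/$(pgrep -n kube-apiserver)/oom_adj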
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
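The burst of identical "kubectl get sa default" runs above is a roughly 500 ms poll: the RBAC elevation step cannot finish until the "default" service account exists, so the runner retries the get until it succeeds (12.27 s in this run). A minimal shell equivalent of that wait, assuming the same binary and kubeconfig paths shown in the log:

	# Poll until the default service account appears (sketch of the loop logged above)
	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done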
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
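Each addon above follows the same pattern: scp the manifest into /etc/kubernetes/addons/ over SSH, then apply everything with the pinned kubectl against the node-local kubeconfig. The final metrics-server apply is the command from the line above, merely split across lines for readability (no new flags introduced):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.30.3/kubectl apply \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	  -f /etc/kubernetes/addons/metrics-server-service.yaml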
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
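The healthz wait above is a plain HTTPS GET that must return 200 with body "ok" before startup proceeds. To repeat it by hand against this cluster (URL taken from the log; the -k flag is an assumption to skip verifying the minikube-generated CA):

	# Expect HTTP 200 and the body "ok"
	curl -k https://192.168.61.97:8444/healthz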
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
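The "minor skew: 0" line above means the host kubectl (1.30.3) and the cluster apiserver (1.30.3) are within kubectl's supported one-minor-version skew, so no warning is emitted. The same pair of versions can be checked directly, assuming the usual minikube convention that the context name matches the profile name:

	# Prints client and server versions for this cluster's context
	kubectl version --context default-k8s-diff-port-214905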
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
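Everything kubeadm suggests in the failure text above can be run directly on the node; the sequence below simply strings together the exact commands it prints (only the ordering and comments are added, and CONTAINERID stays a placeholder):

	systemctl status kubelet        # is the unit active at all?
	journalctl -xeu kubelet         # why the kubelet is failing or crash-looping
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID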
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
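The grep/rm pairs above implement the stale-config check: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that does not match (or, as here, does not exist) is removed before kubeadm init is retried. A compact sketch of the same loop, assuming only the four filenames shown in the log:

	# Remove any kubeconfig that does not point at the expected control-plane endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done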
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
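The repeated [kubelet-check] failures above are kubeadm polling the kubelet's local health endpoint until it answers. A minimal sketch of an equivalent probe, using only the Go standard library — the endpoint comes from the log ('curl -sSL http://localhost:10248/healthz'); the retry budget and helper name are illustrative:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probeKubelet polls the kubelet healthz endpoint the way the
	// [kubelet-check] phase does, returning nil once it reports healthy.
	func probeKubelet(url string, attempts int, interval time.Duration) error {
		client := &http.Client{Timeout: 2 * time.Second}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is up and healthy
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("kubelet not healthy after %d attempts", attempts)
	}

	func main() {
		if err := probeKubelet("http://localhost:10248/healthz", 10, 5*time.Second); err != nil {
			fmt.Println(err) // mirrors "dial tcp 127.0.0.1:10248: connect: connection refused"
		}
	}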
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
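Each cri.go/logs.go pair above is one iteration of the same probe: run crictl with a name filter and see whether any container IDs come back. A hedged sketch of that loop — the crictl invocation is taken verbatim from the log; the helper name and error handling are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers asks crictl for all containers (running or exited)
	// whose name matches the given control-plane component.
	func listCRIContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
			ids, err := listCRIContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}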
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
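The describe-nodes step fails with "connection refused" on localhost:8443 because no kube-apiserver container ever came up, consistent with the empty crictl listings above. A quick way to confirm this is a plain TCP-level refusal rather than a TLS or auth problem, sketched with the standard library (address taken from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The kubeconfig on the node points at localhost:8443 (see the log);
		// a refused dial here means nothing is listening at all.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}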
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
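The suggestion above points at the usual cause of this failure mode on v1.20: the kubelet and CRI-O disagreeing on the cgroup driver. A hedged sketch that surfaces both sides for comparison — the file path is the typical kubelet default, not taken from this log, and 'crio config' (which prints CRI-O's effective configuration) is assumed to be on the node:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// grep returns the trimmed lines of text containing substr.
	func grep(text, substr string) []string {
		var hits []string
		sc := bufio.NewScanner(strings.NewReader(text))
		for sc.Scan() {
			if strings.Contains(sc.Text(), substr) {
				hits = append(hits, strings.TrimSpace(sc.Text()))
			}
		}
		return hits
	}

	func main() {
		// Kubelet side: cgroupDriver in its config file (default path; adjust if needed).
		if data, err := os.ReadFile("/var/lib/kubelet/config.yaml"); err == nil {
			fmt.Println("kubelet:", grep(string(data), "cgroupDriver"))
		}
		// CRI-O side: cgroup_manager in its effective configuration.
		if out, err := exec.Command("sudo", "crio", "config").Output(); err == nil {
			fmt.Println("crio:   ", grep(string(out), "cgroup_manager"))
		}
	}

If the two values differ (cgroupfs vs systemd), the --extra-config=kubelet.cgroup-driver=systemd flag from the suggestion brings the kubelet in line with CRI-O.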
	I0722 00:58:37.963746   71766 out.go:177] 
	
	
	==> CRI-O <==
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.903967419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=173cdfce-0ead-4483-9683-1d0492351976 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.905063346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1b36855-f08c-4ddc-9efb-571cb5846f36 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.905396795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610276905376746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1b36855-f08c-4ddc-9efb-571cb5846f36 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.906057331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d03cf4f8-8f22-488a-b525-8cfdaf0b1173 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.906115097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d03cf4f8-8f22-488a-b525-8cfdaf0b1173 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.906346201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5,PodSandboxId:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727980174760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b,PodSandboxId:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727954431430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4,PodSandboxId:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721609726585326384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4,PodSandboxId:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721609725416983857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8,PodSandboxId:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721609714547651719,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba,PodSandboxId:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721609714578546198,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6,PodSandboxId:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721609714494323709,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30,PodSandboxId:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721609714443498839,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf,PodSandboxId:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721609424925744005,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d03cf4f8-8f22-488a-b525-8cfdaf0b1173 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.944148689Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7cf3441d-a74e-456b-864d-dcc363ff6c4d name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.944242994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7cf3441d-a74e-456b-864d-dcc363ff6c4d name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.945757615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0905d6b0-6a2b-4ba0-a1a5-ddb942545407 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.946113256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610276946085417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0905d6b0-6a2b-4ba0-a1a5-ddb942545407 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.946893687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8793c2c1-d78d-4325-8190-4d2f1b3381ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.946945208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8793c2c1-d78d-4325-8190-4d2f1b3381ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.947150056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5,PodSandboxId:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727980174760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b,PodSandboxId:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727954431430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4,PodSandboxId:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721609726585326384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4,PodSandboxId:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721609725416983857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8,PodSandboxId:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721609714547651719,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba,PodSandboxId:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721609714578546198,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6,PodSandboxId:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721609714494323709,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30,PodSandboxId:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721609714443498839,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf,PodSandboxId:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721609424925744005,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8793c2c1-d78d-4325-8190-4d2f1b3381ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.979815348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a270db6-500b-4aa3-83a0-eaa09beb93a8 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.979900777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a270db6-500b-4aa3-83a0-eaa09beb93a8 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.981011390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4e9c204-03ff-477c-965c-aa065861ab8d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.981435878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610276981408356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4e9c204-03ff-477c-965c-aa065861ab8d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.981904323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5364f3cd-c827-4326-8e53-735288ce2c8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.981957508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5364f3cd-c827-4326-8e53-735288ce2c8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:36 no-preload-945581 crio[715]: time="2024-07-22 01:04:36.982178784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5,PodSandboxId:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727980174760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b,PodSandboxId:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727954431430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4,PodSandboxId:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721609726585326384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4,PodSandboxId:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721609725416983857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8,PodSandboxId:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721609714547651719,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba,PodSandboxId:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721609714578546198,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6,PodSandboxId:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721609714494323709,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30,PodSandboxId:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721609714443498839,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf,PodSandboxId:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721609424925744005,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5364f3cd-c827-4326-8e53-735288ce2c8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:37 no-preload-945581 crio[715]: time="2024-07-22 01:04:37.002822265Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=becf304f-205b-451f-8c00-3dc9dfd821a3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 22 01:04:37 no-preload-945581 crio[715]: time="2024-07-22 01:04:37.003151605Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-68wll,Uid:0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609727712893727,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T00:55:26.497455545Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-9j27w,Uid:6979f6f9-75ac-49d9-adaf-71524576aad3,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609727694781471,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T00:55:26.485946195Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc4772de1eaf53303556cdaa286523415725fefbb827371fc5f9043736520281,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-l858z,Uid:0f17da27-a5bf-46ea-bbb8-00ee2f308542,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609726520748152,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-l858z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f17da27-a5bf-46ea-bbb8-00ee2f308542,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-07-22T00:55:26.205271795Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0448fcfd-604d-47b4-822e-bc0d117d3b2e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609726412490680,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-22T00:55:26.103803143Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&PodSandboxMetadata{Name:kube-proxy-g56gz,Uid:81c84dcd-74b2-44b3-b25e-4074cfe2881d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609725298016138,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T00:55:24.982519062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-945581,Uid:78a3bc5c3e001457a5031a7022a013a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609714313237612,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 78a3bc5c3e001457a5031a7022a013a4,kubernetes.io/config.seen: 2024-07-22T00:55:13.861992598Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&PodSandboxMetadata{Name:kube-controller-m
anager-no-preload-945581,Uid:ffbf4901cbdfd3f44f04f34ad80ba5ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609714311462747,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ffbf4901cbdfd3f44f04f34ad80ba5ce,kubernetes.io/config.seen: 2024-07-22T00:55:13.861990556Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-945581,Uid:d933df5461e83068804e0d24b2eeaa8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609714309937420,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.251:2379,kubernetes.io/config.hash: d933df5461e83068804e0d24b2eeaa8b,kubernetes.io/config.seen: 2024-07-22T00:55:13.861984297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-945581,Uid:66a4fbf4e1b85a82bdfb3c5a3c11917d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721609714299302531,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.251:844
3,kubernetes.io/config.hash: 66a4fbf4e1b85a82bdfb3c5a3c11917d,kubernetes.io/config.seen: 2024-07-22T00:55:13.861988883Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-945581,Uid:66a4fbf4e1b85a82bdfb3c5a3c11917d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721609424738657192,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.251:8443,kubernetes.io/config.hash: 66a4fbf4e1b85a82bdfb3c5a3c11917d,kubernetes.io/config.seen: 2024-07-22T00:50:24.253368689Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=becf304f-205b-451f-8c00-3dc9dfd821a3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 22 01:04:37 no-preload-945581 crio[715]: time="2024-07-22 01:04:37.004102479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b2dee43-c59a-432a-8ecc-053acf17af24 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:37 no-preload-945581 crio[715]: time="2024-07-22 01:04:37.004170226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b2dee43-c59a-432a-8ecc-053acf17af24 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:04:37 no-preload-945581 crio[715]: time="2024-07-22 01:04:37.004729560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5,PodSandboxId:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727980174760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b,PodSandboxId:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727954431430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4,PodSandboxId:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721609726585326384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4,PodSandboxId:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721609725416983857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8,PodSandboxId:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721609714547651719,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba,PodSandboxId:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721609714578546198,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6,PodSandboxId:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721609714494323709,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30,PodSandboxId:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721609714443498839,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf,PodSandboxId:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721609424925744005,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b2dee43-c59a-432a-8ecc-053acf17af24 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ddb5673ebc910       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1ec0525f0da37       coredns-5cfdc65f69-68wll
	c15b7cf4a9c99       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   39220f03453a9       coredns-5cfdc65f69-9j27w
	901b26fcd1ca9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   1b16938ff7bcd       storage-provisioner
	dbe524b3dbde3       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   2b43b946ec07a       kube-proxy-g56gz
	af839eb6670b9       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   d7a60351cd728       etcd-no-preload-945581
	945ddd91e654d       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   a751729723ef9       kube-scheduler-no-preload-945581
	4e13520bd3680       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   e4bddabdca855       kube-controller-manager-no-preload-945581
	e1ee8c2526929       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   41a74b5018194       kube-apiserver-no-preload-945581
	d165172b79f19       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   f6f719f80db34       kube-apiserver-no-preload-945581
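	
	The table above is the human-readable form of the ListContainers/ListPodSandbox responses in the crio debug log further up. A minimal sketch for reproducing it against the CRI socket directly, assuming the default minikube layout (the node's cri-socket annotation points at unix:///var/run/crio/crio.sock):
	
	    minikube ssh -p no-preload-945581   # open a shell on the node for this profile
	    sudo crictl ps -a                   # all containers, including the exited kube-apiserver (attempt 1)
	    sudo crictl pods                    # the sandboxes from the ListPodSandbox response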
	
	
	==> coredns [c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
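	
	Both CoreDNS replicas log only the startup banner, so the config loaded cleanly. To confirm the pods are Ready (a sketch, assuming the kubeconfig context is named after the profile and using the k8s-app: kube-dns label from the sandbox metadata above):
	
	    kubectl --context no-preload-945581 -n kube-system get pods -l k8s-app=kube-dns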
	
	
	==> describe nodes <==
	Name:               no-preload-945581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-945581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=no-preload-945581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:55:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-945581
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 01:04:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 01:00:35 +0000   Mon, 22 Jul 2024 00:55:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 01:00:35 +0000   Mon, 22 Jul 2024 00:55:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 01:00:35 +0000   Mon, 22 Jul 2024 00:55:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 01:00:35 +0000   Mon, 22 Jul 2024 00:55:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.251
	  Hostname:    no-preload-945581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a82bdef081e54ecbb38e19ac2a58d2df
	  System UUID:                a82bdef0-81e5-4ecb-b38e-19ac2a58d2df
	  Boot ID:                    2b3f0c55-5d35-4493-bb2f-e403074cac36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-68wll                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-5cfdc65f69-9j27w                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-945581                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-945581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-945581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-g56gz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-no-preload-945581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-78fcd8795b-l858z              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node no-preload-945581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node no-preload-945581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node no-preload-945581 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m17s (x2 over 9m18s)  kubelet          Node no-preload-945581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s (x2 over 9m18s)  kubelet          Node no-preload-945581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s (x2 over 9m18s)  kubelet          Node no-preload-945581 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m13s                  node-controller  Node no-preload-945581 event: Registered Node no-preload-945581 in Controller
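	
	The node view above is `kubectl describe node` output captured by the log collector. To re-query it live (a sketch, assuming the kubeconfig context is named after the profile):
	
	    kubectl --context no-preload-945581 describe node no-preload-945581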
	
	
	==> dmesg <==
	[  +0.050401] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038106] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.465426] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.710945] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Jul22 00:50] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.524360] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.056029] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062446] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.167060] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.161401] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.280392] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[ +14.231418] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +0.059555] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.533069] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +5.691232] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.688230] kauditd_printk_skb: 86 callbacks suppressed
	[Jul22 00:55] systemd-fstab-generator[2909]: Ignoring "noauto" option for root device
	[  +0.063959] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.483086] systemd-fstab-generator[3230]: Ignoring "noauto" option for root device
	[  +0.075754] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.412438] systemd-fstab-generator[3345]: Ignoring "noauto" option for root device
	[  +0.095544] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.885626] kauditd_printk_skb: 90 callbacks suppressed
	
	
	==> etcd [af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba] <==
	{"level":"info","ts":"2024-07-22T00:55:15.117704Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T00:55:15.120806Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"439bb489ce44e0e1","initial-advertise-peer-urls":["https://192.168.50.251:2380"],"listen-peer-urls":["https://192.168.50.251:2380"],"advertise-client-urls":["https://192.168.50.251:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.251:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T00:55:15.120846Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T00:55:15.119712Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.251:2380"}
	{"level":"info","ts":"2024-07-22T00:55:15.12089Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.251:2380"}
	{"level":"info","ts":"2024-07-22T00:55:15.45663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T00:55:15.456692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T00:55:15.456722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 received MsgPreVoteResp from 439bb489ce44e0e1 at term 1"}
	{"level":"info","ts":"2024-07-22T00:55:15.456737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:55:15.456742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 received MsgVoteResp from 439bb489ce44e0e1 at term 2"}
	{"level":"info","ts":"2024-07-22T00:55:15.45675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T00:55:15.456757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 439bb489ce44e0e1 elected leader 439bb489ce44e0e1 at term 2"}
	{"level":"info","ts":"2024-07-22T00:55:15.460793Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:55:15.461014Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"439bb489ce44e0e1","local-member-attributes":"{Name:no-preload-945581 ClientURLs:[https://192.168.50.251:2379]}","request-path":"/0/members/439bb489ce44e0e1/attributes","cluster-id":"dd9b68cf7bac6d9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:55:15.461177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:55:15.462002Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:55:15.46415Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T00:55:15.464188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dd9b68cf7bac6d9","local-member-id":"439bb489ce44e0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:55:15.471658Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:55:15.471697Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:55:15.464451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:55:15.471725Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:55:15.466306Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T00:55:15.472365Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.251:2379"}
	{"level":"info","ts":"2024-07-22T00:55:15.474739Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:04:37 up 14 min,  0 users,  load average: 0.18, 0.20, 0.12
	Linux no-preload-945581 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf] <==
	W0722 00:55:05.941018       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:05.953782       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.045891       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.388463       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.440713       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.501201       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.590870       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.711221       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.756924       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.030308       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.084339       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.114948       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.267976       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.280726       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.368626       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.429997       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.457955       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.520317       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.723906       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.726247       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.747542       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.812061       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.823486       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.889069       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.929539       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0722 01:00:18.158290       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:00:18.158451       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0722 01:00:18.159669       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 01:00:18.159682       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:01:18.161059       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:01:18.161185       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0722 01:01:18.161076       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:01:18.161246       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0722 01:01:18.162422       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 01:01:18.162492       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:03:18.163464       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:03:18.163680       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0722 01:03:18.163465       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:03:18.163750       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0722 01:03:18.164944       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 01:03:18.165025       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
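	
	The repeating 503s mean the aggregation layer cannot reach the backend for v1beta1.metrics.k8s.io, consistent with the metrics-server failures elsewhere in this report. Quick checks (a sketch; the label comes from the metrics-server sandbox metadata above, and the context name is assumed to match the profile):
	
	    kubectl --context no-preload-945581 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context no-preload-945581 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context no-preload-945581 get --raw /apis/metrics.k8s.io/v1beta1   # reproduces the 503 while the backend is down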
	
	
	==> kube-controller-manager [4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6] <==
	I0722 00:59:25.065645       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 00:59:25.115064       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 00:59:55.074210       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 00:59:55.122810       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:00:25.085268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:00:25.130330       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:00:35.979330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-945581"
	I0722 01:00:55.094061       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:00:55.136356       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:01:23.992908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="337.879µs"
	I0722 01:01:25.102120       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:01:25.143002       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:01:39.000302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="225.402µs"
	I0722 01:01:55.117519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:01:55.149016       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:02:25.126403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:02:25.155457       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:02:55.136512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:02:55.164413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:03:25.145656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:03:25.170925       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:03:55.155034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:03:55.178131       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:04:25.169176       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:04:25.185144       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
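	
	These garbage-collector and resource-quota errors share the same root cause as the apiserver 503s: discovery of metrics.k8s.io/v1beta1 keeps failing, so both controllers retry on their 30s resync. They stop once the APIService reports Available=True; one hedged way to watch for that:
	
	    kubectl --context no-preload-945581 get apiservice v1beta1.metrics.k8s.io -w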
	
	
	==> kube-proxy [dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0722 00:55:25.765206       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0722 00:55:25.777051       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.251"]
	E0722 00:55:25.777121       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0722 00:55:25.838662       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0722 00:55:25.838709       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:55:25.838774       1 server_linux.go:170] "Using iptables Proxier"
	I0722 00:55:25.843119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0722 00:55:25.843399       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0722 00:55:25.843424       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:55:25.845257       1 config.go:197] "Starting service config controller"
	I0722 00:55:25.845296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:55:25.845319       1 config.go:104] "Starting endpoint slice config controller"
	I0722 00:55:25.845324       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:55:25.854102       1 config.go:326] "Starting node config controller"
	I0722 00:55:25.854219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:55:25.945984       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:55:25.946068       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:55:25.954259       1 shared_informer.go:320] Caches are synced for node config
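	
	The nftables cleanup errors ("Operation not supported") are expected when the node kernel lacks nftables support, and kube-proxy falls back cleanly: "No iptables support for family IPv6" plus "Using iptables Proxier" show a single-stack IPv4 iptables backend. To confirm the rules were actually programmed (a sketch, run on the node):
	
	    sudo iptables-save -t nat | grep -m5 KUBE-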
	
	
	==> kube-scheduler [945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8] <==
	W0722 00:55:17.179013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 00:55:17.179040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:17.179239       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:55:17.179270       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0722 00:55:17.180230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:55:17.180263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:17.181849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:17.181880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.018618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:55:18.018758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.050255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:55:18.050311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.062418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:18.062467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.198325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:55:18.198371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.231369       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:55:18.231466       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0722 00:55:18.264094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:18.264230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.267702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:18.267837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.345298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:18.345415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0722 00:55:20.262453       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 01:02:20 no-preload-945581 kubelet[3237]: E0722 01:02:20.033244    3237 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:02:20 no-preload-945581 kubelet[3237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:02:20 no-preload-945581 kubelet[3237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:02:20 no-preload-945581 kubelet[3237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:02:20 no-preload-945581 kubelet[3237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:02:31 no-preload-945581 kubelet[3237]: E0722 01:02:31.975620    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:02:43 no-preload-945581 kubelet[3237]: E0722 01:02:43.975896    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:02:57 no-preload-945581 kubelet[3237]: E0722 01:02:57.975126    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:03:11 no-preload-945581 kubelet[3237]: E0722 01:03:11.974477    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:03:20 no-preload-945581 kubelet[3237]: E0722 01:03:20.034102    3237 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:03:20 no-preload-945581 kubelet[3237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:03:20 no-preload-945581 kubelet[3237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:03:20 no-preload-945581 kubelet[3237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:03:20 no-preload-945581 kubelet[3237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:03:25 no-preload-945581 kubelet[3237]: E0722 01:03:25.975344    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:03:39 no-preload-945581 kubelet[3237]: E0722 01:03:39.975484    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:03:50 no-preload-945581 kubelet[3237]: E0722 01:03:50.973858    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:04:05 no-preload-945581 kubelet[3237]: E0722 01:04:05.975852    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:04:19 no-preload-945581 kubelet[3237]: E0722 01:04:19.975937    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:04:20 no-preload-945581 kubelet[3237]: E0722 01:04:20.032109    3237 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:04:20 no-preload-945581 kubelet[3237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:04:20 no-preload-945581 kubelet[3237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:04:20 no-preload-945581 kubelet[3237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:04:20 no-preload-945581 kubelet[3237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:04:32 no-preload-945581 kubelet[3237]: E0722 01:04:32.974277    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	
	
	==> storage-provisioner [901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4] <==
	I0722 00:55:26.693365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:55:26.708482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:55:26.708633       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:55:26.719741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:55:26.720475       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a68e8e9-db17-44cc-b224-e2d6df163c4e", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-945581_8810de60-9f6d-46bf-99a2-c9646514563c became leader
	I0722 00:55:26.720515       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-945581_8810de60-9f6d-46bf-99a2-c9646514563c!
	I0722 00:55:26.821291       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-945581_8810de60-9f6d-46bf-99a2-c9646514563c!
	

                                                
                                                
-- /stdout --
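Note: the kubelet loop in the log above keeps failing to pull "fake.domain/registry.k8s.io/echoserver:1.4" because fake.domain is the bogus registry this test injects when enabling the metrics-server addon (see the --registries=MetricsServer=fake.domain rows in the Audit table further down), so ImagePullBackOff is the expected steady state. A minimal Go sketch, assuming kubectl access to the same profile and the addon's k8s-app=metrics-server label (both taken from the logs, not from harness code), of reading that waiting reason straight from the pod status:

	// Sketch only (not part of the harness): print each metrics-server
	// container's waiting reason; ImagePullBackOff is expected here since
	// fake.domain is not a real registry.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "no-preload-945581", "-n", "kube-system",
			"get", "po", "-l", "k8s-app=metrics-server",
			"-o=jsonpath={range .items[*]}{.metadata.name}{\"=\"}{.status.containerStatuses[*].state.waiting.reason}{\"\\n\"}{end}",
		).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl failed: %v\n%s", err, out)
		}
		fmt.Print(string(out)) // e.g. metrics-server-78fcd8795b-l858z=ImagePullBackOff
	}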
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-945581 -n no-preload-945581
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-945581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-l858z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-945581 describe pod metrics-server-78fcd8795b-l858z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-945581 describe pod metrics-server-78fcd8795b-l858z: exit status 1 (63.773713ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-l858z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-945581 describe pod metrics-server-78fcd8795b-l858z: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.30s)
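The post-mortem above also shows a small race: the non-running pod captured at helpers_test.go:272 (metrics-server-78fcd8795b-l858z) was gone by the time the describe ran, presumably because it had been deleted or re-created in between, hence the NotFound in the stderr block. A rough Go sketch of the same two queries (not helpers_test.go itself), with the describe keyed on the addon's label instead of the captured pod name so a re-created pod is still matched:

	// Rough sketch: list non-running pods across all namespaces, then
	// describe metrics-server by label rather than by exact pod name.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func kubectl(args ...string) string {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Printf("kubectl %v: %v", args, err) // keep going; this is a post-mortem
		}
		return string(out)
	}

	func main() {
		ctx := "no-preload-945581"
		fmt.Println(kubectl("--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running"))
		fmt.Println(kubectl("--context", ctx, "-n", "kube-system",
			"describe", "po", "-l", "k8s-app=metrics-server"))
	}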

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0722 00:57:54.282650   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0722 00:58:01.305513   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-22 01:05:30.282209198 +0000 UTC m=+6048.990807925
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
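For reference, the 9m0s wait that just failed is a poll-until-deadline pattern; below is a self-contained Go sketch under the same parameters (context, namespace, and label taken from the messages above; the harness's actual implementation in start_stop_delete_test.go differs):

	// Sketch of the wait: poll kubectl until a kubernetes-dashboard pod
	// reports phase Running, or give up after 9 minutes.
	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(9 * time.Minute)
		for {
			out, err := exec.Command("kubectl",
				"--context", "default-k8s-diff-port-214905",
				"-n", "kubernetes-dashboard",
				"get", "po", "-l", "k8s-app=kubernetes-dashboard",
				"-o=jsonpath={.items[*].status.phase}",
			).Output()
			if err == nil && bytes.Contains(out, []byte("Running")) {
				fmt.Println("dashboard pod is Running")
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("pod k8s-app=kubernetes-dashboard did not start within 9m0s (last phases %q, err %v)", out, err)
			}
			time.Sleep(10 * time.Second)
		}
	}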
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-214905 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-214905 logs -n 25: (1.962459173s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-590595             | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-590595                  | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:19.710843   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:25.790913   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:28.862882   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:34.942917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:38.014863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:44.094898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:47.166853   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:53.246799   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:56.318939   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:02.398890   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:05.470909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:11.550863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:14.622851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:20.702859   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:23.774851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:29.854925   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:32.926912   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:39.006904   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:42.078947   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:48.158822   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:51.230942   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:57.310909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:00.382907   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:06.462849   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:09.534836   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:15.614953   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:18.686869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:24.766917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:27.838869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:33.918902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:36.990920   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:43.070898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
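
The `retry.go:31] will retry after ...` lines above are a poll-with-growing-delay loop waiting for libvirt to hand the domain a DHCP lease. A minimal Go sketch of that pattern, assuming an illustrative helper name (`waitFor`) and a capped, roughly doubling delay rather than minikube's exact backoff:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls fn until it succeeds or timeout elapses, roughly
	// doubling the sleep between attempts, like the "will retry after"
	// lines in this log. Names here are illustrative, not minikube's API.
	func waitFor(fn func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if err := fn(); err == nil {
				return nil
			}
			fmt.Printf("retry %d: will retry after %v\n", attempt, delay)
			time.Sleep(delay)
			if delay *= 2; delay > 3*time.Second {
				delay = 3 * time.Second
			}
		}
		return errors.New("timed out waiting for machine to come up")
	}

	func main() {
		start := time.Now()
		err := waitFor(func() error {
			if time.Since(start) < time.Second {
				return errors.New("no IP yet") // stand-in for the libvirt DHCP lookup
			}
			return nil
		}, 10*time.Second)
		fmt.Println("result:", err)
	}

The delays printed above (404ms, 441ms, 501ms, 637ms, ...) grow in roughly this fashion until the lease appears.
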
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
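
WaitForSSH above shells out to the system `ssh` binary with host-key checking disabled and treats a clean `exit 0` as proof the guest is reachable. A sketch of that liveness probe (the function name is a placeholder and most of the real flags are trimmed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshAlive returns nil once "ssh ... exit 0" succeeds, mirroring the
	// external-client WaitForSSH step in the log. Host and key path are
	// placeholders for this sketch.
	func sshAlive(host, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(sshAlive("192.168.50.251", "/path/to/id_rsa"))
	}
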
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
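
The hostname step is deliberately idempotent: the script only touches /etc/hosts when no line already names the new host, rewriting the 127.0.1.1 entry if present and appending one otherwise. A sketch of templating that script from Go, with a hypothetical `hostsFixup` helper:

	package main

	import "fmt"

	// hostsFixup renders the shell run over SSH above: add or rewrite the
	// 127.0.1.1 line only when /etc/hosts does not already name the host.
	func hostsFixup(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixup("no-preload-945581"))
	}
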
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
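
copyHostCerts above follows a found/rm/cp sequence so repeated runs converge on fresh copies of key.pem, ca.pem, and cert.pem. A local sketch of that remove-then-copy step (the helper name and the 0600 mode are illustrative):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// refreshCopy mirrors the exec_runner found/rm/cp sequence above:
	// drop any existing destination, then copy the source afresh.
	func refreshCopy(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		fmt.Println(refreshCopy("certs/key.pem", "key.pem"))
	}
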
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
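
The fix.go lines compare the guest's `date +%s.%N` output against the host clock and only intervene when the delta exceeds a tolerance; here the 93ms delta passes. A sketch of that comparison, with the 2s tolerance being an assumed value, not one taken from this log:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether guest and host clocks agree within tol,
	// as in the "guest clock delta is within tolerance" line above.
	func clockDeltaOK(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol
	}

	func main() {
		host := time.Now()
		guest := host.Add(93 * time.Millisecond) // delta seen in this log
		d, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
	}
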
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
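
The cri-o reconfiguration above is a fixed ladder of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf, capped by a daemon-reload and restart so the new pause image and cgroup manager take effect. A sketch that assembles the two central edits; only the command strings mirror the log, and the runner is omitted:

	package main

	import "fmt"

	// crioEdits returns the shell edits run above: pin the pause image and
	// switch the cgroup manager, then restart crio to pick them up.
	func crioEdits(pauseImage, cgroupMgr string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, c := range crioEdits("registry.k8s.io/pause:3.10", "cgroupfs") {
			fmt.Println(c)
		}
	}
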
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
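
LoadCachedImages starts from the fixed image set for this Kubernetes version; the daemon lookups fail above only because the CI host keeps no local Docker copies, which routes retrieval to the on-disk cache instead. A sketch assembling that list, with the versions copied from the log:

	package main

	import "fmt"

	// kubeImages returns the image set LoadCachedImages starts from for
	// this run's versions (taken from the log above).
	func kubeImages(k8s, etcd, coredns, pause string) []string {
		return []string{
			"registry.k8s.io/kube-apiserver:" + k8s,
			"registry.k8s.io/kube-controller-manager:" + k8s,
			"registry.k8s.io/kube-scheduler:" + k8s,
			"registry.k8s.io/kube-proxy:" + k8s,
			"registry.k8s.io/pause:" + pause,
			"registry.k8s.io/etcd:" + etcd,
			"registry.k8s.io/coredns/coredns:" + coredns,
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
	}

	func main() {
		for _, img := range kubeImages("v1.31.0-beta.0", "3.5.14-0", "v1.11.1", "3.10") {
			fmt.Println(img)
		}
	}
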
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
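
The `copy: skipping ... (exists)` decisions that follow rest on a remote `stat` (the `%!s(MISSING) %!y(MISSING)` noise is the log's own printf mangling what is presumably `%s %y`, size and mtime) compared against the local cache file. One plausible shape of that check, with the comparison rule assumed rather than taken from the source:

	package main

	import (
		"fmt"
		"os"
	)

	// needsCopy sketches the size+mtime comparison behind the
	// "copy: skipping ... (exists)" lines: reuse the remote file when the
	// sizes match and the local source is not newer. remoteSize and
	// remoteMtime would come from `stat -c "%s %y" <path>` over SSH.
	func needsCopy(local string, remoteSize, remoteMtime int64) (bool, error) {
		fi, err := os.Stat(local)
		if err != nil {
			return false, err
		}
		if fi.Size() == remoteSize && fi.ModTime().Unix() <= remoteMtime {
			return false, nil // remote copy is current; skip the transfer
		}
		return true, nil
	}

	func main() {
		need, err := needsCopy("/tmp/example", 1234, 1721609407)
		fmt.Println(need, err)
	}
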
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
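
The sequence above is minikube's cached-image pre-load: "podman image inspect" decides whether an image "needs transfer", "crictl rmi" clears any stale tag, stat -c "%s %y" compares the tarball already on the VM against the local cache so unchanged files are skipped ("copy: skipping ... (exists)"), and "sudo podman load" imports the tarball into CRI-O's image store. A minimal sketch of the same idea, assuming an ssh-reachable guest; the "vm" host alias and cache path are illustrative, not minikube's actual code:

    # Transfer a cached image tarball only when it differs, then load it
    # into CRI-O's storage via podman.
    src=$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
    dst=/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
    if [ "$(ssh vm stat -c %s "$dst" 2>/dev/null)" != "$(stat -c %s "$src")" ]; then
      scp "$src" "vm:$dst"              # copy only on size mismatch
    fi
    ssh vm sudo podman load -i "$dst"   # import into CRI-O's image storage
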
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
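
In the kubelet drop-in above, the empty "ExecStart=" line is deliberate: systemd drop-ins append to list-valued settings, so the blank assignment clears the base unit's ExecStart before the minikube-specific command line is set. The merged unit can be inspected on the guest with:

    systemctl cat kubelet   # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
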
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
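
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new and only swapped in if they differ from the running config (see the "diff -u" step further down). As an illustration only, not a step minikube runs, recent kubeadm releases can sanity-check such a file directly:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
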
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
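
The one-liner above is an idempotent /etc/hosts update: grep -v strips any previous control-plane.minikube.internal entry, the fresh mapping is appended, and the result is staged in a temp file and installed with "sudo cp" (a plain "sudo ... > /etc/hosts" would perform the redirection as the unprivileged user and fail). The same pattern generalized, as a sketch with placeholder names:

    update_hosts_entry() {   # usage: update_hosts_entry <ip> <name>
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
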
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
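
Two standard OpenSSL idioms are at work in the steps above: CA certificates are made discoverable by symlinking them under their subject hash (that is what names like 51391683.0 and b5213941.0 are), and "-checkend 86400" exits non-zero if a certificate expires within the next 24 hours, which is how minikube decides whether a cert must be regenerated. For example:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # hash-named lookup link
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt          # exit 1 => expires <24h
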
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
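
Each of the four checks above follows the same shape: keep a kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm will regenerate it. A compact equivalent of the loop (a sketch, not minikube's code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"   # missing or stale: drop it
    done
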
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
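
Restarting a stopped profile goes through libvirt rather than a fresh create: the driver reactivates the "default" and "mk-embed-certs-360389" networks, fetches the saved domain XML, and boots the existing domain. The equivalent manual steps with virsh (illustrative only):

    virsh net-start mk-embed-certs-360389 2>/dev/null || true   # ensure the profile network is up
    virsh dumpxml embed-certs-360389 > /tmp/embed-certs.xml     # the "Getting domain xml..." step
    virsh start embed-certs-360389                              # boot the existing VM
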
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
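
provision.go issues a per-machine server certificate signed by the minikube CA, with SANs covering every name and address the VM may be reached by. minikube does this in Go; an equivalent openssl sketch, with illustrative file names:

    openssl req -new -newkey rsa:2048 -nodes -subj '/O=jenkins.old-k8s-version-366657' \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.174,DNS:localhost,DNS:minikube,DNS:old-k8s-version-366657') \
      -out server.pem
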
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
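
The command writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; the guest's crio unit is presumably wired to source that file as an environment file so the extra flag takes effect, which can be confirmed on the VM with:

    systemctl cat crio | grep -iA1 environment   # shows the EnvironmentFile wiring, if present
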
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
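fix.go reads the guest clock with `date +%s.%N`, compares it to the host clock captured just before, and leaves the guest clock alone when the drift is inside tolerance (96ms here). A sketch of that comparison; the one-second tolerance is an assumption for illustration, not minikube's exact constant:

    package main

    import (
    	"fmt"
    	"strings"
    	"time"
    )

    // guestClockDelta parses `date +%s.%N` output (seconds, then a
    // zero-padded 9-digit nanosecond field) and returns host-minus-guest drift.
    func guestClockDelta(dateOutput string, hostNow time.Time) (time.Duration, error) {
    	var sec, nsec int64
    	if _, err := fmt.Sscanf(strings.TrimSpace(dateOutput), "%d.%d", &sec, &nsec); err != nil {
    		return 0, err
    	}
    	return hostNow.Sub(time.Unix(sec, nsec)), nil
    }

    func main() {
    	// Both timestamps are the ones printed in the log above.
    	hostNow := time.Unix(1721609425, 903106071)
    	delta, err := guestClockDelta("1721609425.999209033", hostNow)
    	if err != nil {
    		panic(err)
    	}
    	if delta < 0 {
    		delta = -delta
    	}
    	// fix.go accepts small drift instead of rewriting the guest clock.
    	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
    }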
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
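cni.go sidelines bridge and podman CNI definitions by renaming them with a .mk_disabled suffix rather than deleting them, so a later start can restore them. Roughly the same rename-to-disable step in Go, with the glob patterns taken from the find command above and error handling simplified:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableCNIConfigs renames matching configs so the runtime ignores
    // them; the .mk_disabled suffix lets them be restored later.
    func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
    	var disabled []string
    	for _, p := range patterns {
    		matches, err := filepath.Glob(filepath.Join(dir, p))
    		if err != nil {
    			return disabled, err
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already sidelined
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, m)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	fmt.Println("disabled:", disabled) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
    }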
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
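The sed pair above pins the pause image and switches CRI-O to the cgroupfs cgroup manager, then re-inserts conmon_cgroup = "pod", which CRI-O requires once cgroupfs is selected. The same key = value rewrite done natively, assuming one key per line in the drop-in file:

    package main

    import (
    	"os"
    	"regexp"
    )

    // setCrioOption rewrites a `key = value` line in a CRI-O drop-in file,
    // the same edit the sed one-liners above perform.
    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	_ = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2")
    	_ = setCrioOption(conf, "cgroup_manager", "cgroupfs")
    }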
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
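Reading net.bridge.bridge-nf-call-iptables fails with status 255 until br_netfilter is loaded, so the failure is treated as a cue to modprobe the module, after which IPv4 forwarding is enabled. A sketch of that fallback chain, with the command strings copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureNetfilter reproduces the fallback above: a failed sysctl read
    // just means br_netfilter is not loaded yet.
    func ensureNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	// The redirect is a shell feature, so this step goes through sh -c.
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureNetfilter(); err != nil {
    		fmt.Println(err)
    	}
    }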
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
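The bash one-liner is an idempotent upsert: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The equivalent operation sketched for a local file (the real code runs the shell version through the SSH runner):

    package main

    import (
    	"os"
    	"strings"
    )

    // upsertHostsEntry drops any existing line for the host and appends a
    // fresh `ip<TAB>host` mapping, like the grep -v / echo / cp pipeline.
    func upsertHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // old mapping, possibly a stale IP
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	_ = upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal")
    }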
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
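The wait loop treats 403 (RBAC roles not bootstrapped yet) and 500 (poststart hooks still failing) as retryable, and only stops once /healthz returns 200 with the literal body "ok". A compact version of that poll; skipping TLS verification is how an anonymous probe must talk to a fresh self-signed apiserver, and the poll interval here is an assumption:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls until /healthz returns 200; 403 and 500 both
    // mean "keep retrying" while the apiserver finishes bootstrapping.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Anonymous probe against a self-signed apiserver cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // body is the literal "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.50.251:8443/healthz", 4*time.Minute))
    }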
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
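The preload path is: scp the ~473MB tarball to /preloaded.tar.lz4, untar it into /var with lz4 decompression while preserving the security.capability xattr (the Kubernetes binaries need their file capabilities intact), then delete the archive. The extract-and-clean step as a small exec wrapper, with the flags copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload unpacks the preloaded image tarball into dest with the
    // exact tar flags from the log, then removes the archive.
    func extractPreload(tarball, dest string) error {
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability", // keep file capabilities
    		"-I", "lz4", // decompress with lz4
    		"-C", dest, "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("tar: %v: %s", err, out)
    	}
    	return exec.Command("sudo", "rm", "-f", tarball).Run() // free the space once extracted
    }

    func main() {
    	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
    }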
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
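pod_ready.go first short-circuits when the hosting node itself is not Ready (the "skipping!" errors above), then polls each remaining pod for a Ready condition. A condensed version of the per-pod wait using client-go; the kubeconfig path is a placeholder, the poll interval is an assumption, and the node-readiness short-circuit is omitted:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its PodReady condition is True.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-f5ttf", 4*time.Minute))
    }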
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
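LoadCachedImages asks the runtime for each image's ID via podman image inspect and marks any missing or hash-mismatched image as "needs transfer"; when the on-disk cache file is itself absent, the failure is downgraded to the warning above and the images are simply pulled at kubeadm time. The presence check sketched below (the expected hash is the pause:3.2 ID printed earlier in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageNeedsTransfer reports whether the runtime is missing the image
    // or stores it under a different ID than expected.
    func imageNeedsTransfer(image, wantID string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // not present in the runtime at all
    	}
    	return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
    	fmt.Println(imageNeedsTransfer("registry.k8s.io/pause:3.2",
    		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
    }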
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
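
For reference: this multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the kubeadm options struct logged above and shipped to the VM as /var/tmp/minikube/kubeadm.yaml.new. A toy Go sketch of that render step follows; the template fragment and field names are illustrative, not minikube's actual template or structs:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Toy fragment of the kind of template that yields the config above;
        // field names are illustrative, not minikube's actual struct fields.
        frag := "apiVersion: kubeadm.k8s.io/v1beta2\n" +
            "kind: InitConfiguration\n" +
            "localAPIEndpoint:\n" +
            "  advertiseAddress: {{.AdvertiseAddress}}\n" +
            "  bindPort: {{.BindPort}}\n"
        t := template.Must(template.New("kubeadm").Parse(frag))
        t.Execute(os.Stdout, map[string]interface{}{
            "AdvertiseAddress": "192.168.39.174",
            "BindPort":         8443,
        })
    }
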
	
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
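
The one-liner above pins control-plane.minikube.internal in /etc/hosts idempotently: filter out any stale mapping, append the current one, and copy the result back. A minimal Go sketch of the same idea (hypothetical helper mirroring the grep -v / echo / cp pipeline, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so exactly one line maps name to ip,
    // mirroring the grep -v / echo / cp pipeline in the log above.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any previous mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.39.174", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
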
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
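
The openssl/ln sequence above installs each CA into /usr/share/ca-certificates and links it as /etc/ssl/certs/<subject-hash>.0, the directory layout OpenSSL uses to locate trust anchors (51391683.0, 3ec20f2e.0 and b5213941.0 are those hashes). A sketch of the same install step, shelling out to openssl exactly as the log does (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert links certPath into /etc/ssl/certs as <subject-hash>.0,
    // the name OpenSSL uses to look up CA certificates.
    func trustCert(certPath string) error {
        // Same command the log runs to compute the hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // ln -fs semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
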
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
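
Each `openssl x509 -checkend 86400` above asks whether a certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check in native Go, as a sketch (path and window taken from the log; the helper itself is hypothetical):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemPath
    // expires within d — the question `openssl x509 -checkend` answers.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
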
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
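
Because the kubeconfig files were missing, the restart path reruns individual `kubeadm init` phases instead of a full init, in dependency order: certs, then kubeconfig, then kubelet-start, then the control-plane and etcd static-pod manifests. A sketch of driving those phases (hypothetical wrapper; the kubeadm subcommands and flags are exactly those logged above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Phases in dependency order, as in the log: certs -> kubeconfig ->
        // kubelet-start -> control-plane -> etcd.
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, args := range phases {
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
                os.Exit(1)
            }
        }
    }
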
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
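
The repeated pgrep runs above poll roughly every 500ms for the kube-apiserver process to appear. A sketch of such a bounded poll loop (the 4-minute deadline is an assumption, not taken from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep (the same command the log runs) until
    // the kube-apiserver process exists or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process is found.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(4 * time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
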
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
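
SSH availability is probed by running `exit 0` over SSH with host-key checking disabled; an empty error and output, as above, mean sshd is up. A sketch of the probe using the external ssh binary with the same options (the key path and retry budget are assumptions):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` on the guest; a nil error means sshd
    // accepted the connection and executed the command.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := os.ExpandEnv("$HOME/.ssh/id_rsa") // stand-in for the machine's id_rsa
        for i := 0; i < 30; i++ {                // retry while the VM boots
            if sshReady("192.168.72.32", key) {
                fmt.Println("ssh available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Fprintln(os.Stderr, "ssh never came up")
    }
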
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
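
The server certificate is generated with SANs covering loopback, the VM IP, the hostname, localhost and minikube, so the same cert validates however the daemon is addressed. A self-signed crypto/x509 sketch with the same SAN set (minikube actually signs with its CA; self-signing here just keeps the example short):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Error handling elided for brevity.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-360389"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN set matching the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.32")},
            DNSNames:    []string{"embed-certs-360389", "localhost", "minikube"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
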
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
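
The guest clock is read with `date +%s.%N` and compared against the host clock; small deltas (here ~90ms) are tolerated rather than forcing a resync. A sketch of that comparison — feeding in the two timestamps from the log reproduces the 90.477635ms delta (the one-second tolerance is an assumption):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns
    // how far the guest clock is ahead of (or behind) the local one.
    func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            // %N is zero-padded to 9 digits, so it parses directly as nanoseconds.
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec).Sub(local), nil
    }

    func main() {
        // Guest and Remote timestamps from the log above.
        d, _ := clockDelta("1721609447.006036489", time.Unix(1721609446, 915558854))
        const tolerance = time.Second // assumed tolerance for this sketch
        fmt.Printf("delta=%v within tolerance: %v\n", d, d < tolerance && d > -tolerance)
    }
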
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
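	Collected into one place, the runtime switch above stops, disables, and masks both cri-docker and docker so that only cri-o answers on the node (a sketch; minikube issues these over SSH one unit at a time):

	    sudo systemctl stop -f cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    systemctl is-active --quiet docker && echo "docker still active" >&2

	Masking (symlinking the unit to /dev/null) prevents the service from being started again, even as a dependency of another unit.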
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
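	The same cri-o drop-in edits as one script (values copied from the log; CONF is a convenience variable introduced here):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo grep -q '^ *default_sysctls' "$CONF" || \
	      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$CONF"
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

	The last two lines ensure a default_sysctls array exists and then prepend the entry that lets pods bind ports below 1024 without extra privileges.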
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
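	The netfilter probe above is expected to fail on a fresh guest: the sysctl only exists once br_netfilter is loaded, hence the fallback to modprobe. A standalone sketch of the same check:

	    sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null \
	      || sudo modprobe br_netfilter                       # load module if the sysctl is absent
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # forwarding is required for pod routing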
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
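	The /etc/hosts update is a rewrite-and-replace: strip any stale host.minikube.internal line, append the current one, and copy the result back over the original (same technique as the logged command):

	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.72.1\thost.minikube.internal'
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts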
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
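	On the guest, the preload flow is: stat the target to see whether a tarball is already present, transfer it if not, then unpack into /var with extended attributes (including security.capability) preserved. A sketch of the guest-side commands as logged:

	    stat -c "%s %y" /preloaded.tar.lz4 2>/dev/null   # existence check; status 1 triggers the ~406MB copy
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4                    # the tarball is removed once extracted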
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
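	Each CA above is installed by linking it into /etc/ssl/certs under its OpenSSL subject hash, which is how the system trust store is indexed. A sketch over the same three PEMs:

	    for pem in minikubeCA 12263 122632; do
	      f=/usr/share/ca-certificates/$pem.pem
	      h=$(openssl x509 -hash -noout -in "$f")    # e.g. b5213941 for minikubeCA, as in the log
	      sudo ln -fs "$f" "/etc/ssl/certs/$h.0"
	    done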
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
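	-checkend 86400 makes openssl exit non-zero if the certificate expires within the next 24 hours, so the run above is a cheap freshness gate before reusing existing control-plane certs. The same checks, spelled out as a loop:

	    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	               etcd/server etcd/healthcheck-client etcd/peer; do
	      openssl x509 -noout -in "/var/lib/minikube/certs/$crt.crt" -checkend 86400 \
	        || echo "$crt expires within a day; regenerate"
	    done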
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
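	Rather than a full kubeadm init, the restart path replays only the phases it needs against the rendered config. A sketch with the cached binaries on PATH, as in the log:

	    export PATH=/var/lib/minikube/binaries/v1.30.3:$PATH
	    sudo -E kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	    sudo -E kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	    sudo -E kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	    sudo -E kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	    sudo -E kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml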
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
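	The 403s and 500s above are the normal startup sequence: anonymous requests are rejected until RBAC bootstrap finishes, and /healthz reports individual hook failures until the post-start hooks complete. A rough equivalent of the wait, polling with the admin kubeconfig (an assumption; the test harness uses its own authenticated client):

	    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl \
	        --kubeconfig /etc/kubernetes/admin.conf get --raw /healthz 2>/dev/null | grep -qx ok; do
	      sleep 0.5
	    done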
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
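Aside: the `retry.go:31` lines above show the libvirt-side wait loop: the driver polls the network's DHCP leases for the VM's MAC until an IP appears, sleeping a growing, jittered interval between attempts (1.9s, 3.1s, then 3.6s here). A minimal Go sketch of that backoff shape, not minikube's actual code; `lookupIP` is a hypothetical stand-in for the DHCP-lease query:

```go
// Sketch of a retry-with-growing-jittered-backoff loop, assuming a
// hypothetical lookupIP that fails until the lease shows up.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP pretends the DHCP lease appears on the 4th attempt.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errNoIP
	}
	return "192.168.61.97", nil
}

func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			return ip, nil
		}
		// Backoff grows with the attempt count and carries jitter,
		// comparable to the 1.9s / 3.1s / 3.6s waits in the log.
		backoff := time.Duration(attempt+1)*time.Second +
			time.Duration(rand.Intn(500))*time.Millisecond
		time.Sleep(backoff)
	}
	return "", fmt.Errorf("machine did not come up within %v", maxWait)
}

func main() {
	fmt.Println(waitForIP(30 * time.Second))
}
```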
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
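Aside: the `pod_ready.go` block above is the post-restart wait: each system-critical pod is fetched and its `Ready` condition inspected, and while the node itself still reports `Ready:"False"` the per-pod wait is cut short (the `skipping!` lines). A minimal client-go sketch of that Ready-condition poll, not minikube's actual implementation; the kubeconfig path is a placeholder:

```go
// Poll a pod until its PodReady condition is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(p) {
			return nil
		}
		time.Sleep(400 * time.Millisecond) // comparable to the ~400ms gaps in the log
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-360389", 4*time.Minute))
}
```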
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
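Aside: enabling an addon, as logged above, amounts to scp'ing its manifests into `/etc/kubernetes/addons` and applying them with the cluster's bundled kubectl. A hedged sketch of that apply step using `os/exec`; the binary and manifest paths are copied from the log for illustration only:

```go
// Run the cluster's bundled kubectl against a set of addon manifests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddons(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	// Keep the parent environment and point kubectl at the in-VM kubeconfig.
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := applyAddons(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
	)
	fmt.Println(err)
}
```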
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
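Aside: provisioning runs each of these shell snippets (the hostname set, the `/etc/hosts` patch) over SSH with key auth and host-key checking disabled, matching the `StrictHostKeyChecking=no` option logged earlier. A minimal sketch of that remote-exec pattern with `golang.org/x/crypto/ssh`; the address, user, and key path are illustrative values taken from the log:

```go
// Open an SSH session with a private key and run one command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.61.97:22", "docker",
		"/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
```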
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
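Aside: the `fix.go` lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host-side timestamp and accept the skew if the delta is small (81.995205ms here). A sketch of that parse-and-compare step; the 2s tolerance is an assumption, not minikube's documented threshold:

```go
// Parse a "seconds.nanoseconds" clock reading and compare it to a host time.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	guest, err := parseGuestClock(guestOut)
	if err != nil {
		return 0, err
	}
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values lifted from the log: the guest reported 1721609467.506036600
	// while the host-side reading was 2024-07-22 00:51:07.424041395 UTC,
	// so the delta works out to the logged 81.995205ms.
	host := time.Date(2024, 7, 22, 0, 51, 7, 424041395, time.UTC)
	d, err := clockDelta("1721609467.506036600", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v, within tolerance: %v\n", d, d < tolerance)
}
```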
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
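The run of sed commands above rewrites cri-o's drop-in config in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A rough Go equivalent of just the first rewrite, shown only to illustrate the pattern; the path and pause image tag are taken from the log, and error handling is minimal:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			panic(err)
		}
	}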
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
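The sysctl probe above fails with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter kernel module is loaded; minikube then loads the module and enables IP forwarding. A small sketch of the same check-then-load flow (Linux-only, assumes modprobe is on PATH and root privileges):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); err != nil {
			// Not present yet: load the kernel module, as the log does via modprobe.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe failed: %v: %s\n", err, out)
				return
			}
		}
		val, err := os.ReadFile(key)
		if err != nil {
			fmt.Println("sysctl still missing:", err)
			return
		}
		fmt.Printf("bridge-nf-call-iptables = %s", val)
	}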
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
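The bash one-liner above is an idempotent hosts-file update: strip any stale host.minikube.internal entry, append the current one, then copy the temp file back over /etc/hosts. The same pattern in Go, as a sketch requiring the same root privileges:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.61.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale line ending in "\thost.minikube.internal".
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}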
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
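The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new and fed to the kubeadm init phases further down. A quick sketch that parses such a multi-document file and prints each document's kind; using the third-party gopkg.in/yaml.v3 module here is an assumption for the sketch, not what minikube itself does:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF after the last document
			}
			fmt.Println(doc.Kind, doc.APIVersion)
		}
	}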
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
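The ln -fs commands above implement OpenSSL's hashed-directory lookup: a trust-store entry is found via a symlink named <subject-hash>.0 in /etc/ssl/certs. A sketch that derives the link name the same way the log does, by shelling out to "openssl x509 -hash"; the cert path is one of those shown above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		// os.Symlink(cert, link) would then mirror the "ln -fs" in the log.
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		fmt.Println("would link", link, "->", cert)
	}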
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
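Each "openssl x509 -checkend 86400" above exits non-zero if the certificate expires within the next 86400 seconds (24 hours). The equivalent check with Go's crypto/x509, using one of the cert paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// -checkend 86400: is the cert still valid 24h from now?
		deadline := time.Now().Add(86400 * time.Second)
		fmt.Println("valid in 24h:", deadline.Before(cert.NotAfter))
	}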
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
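The sequence above is a readiness poll: /healthz first answers 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200 ok. A minimal sketch of such a wait loop; InsecureSkipVerify is used only to keep the sketch self-contained, whereas minikube verifies against the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.97:8444/healthz")
			if err == nil {
				resp.Body.Close()
				// 403 and 500 both mean "not ready yet"; keep polling.
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for /healthz")
	}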
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
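The pod_ready.go entries above poll each system-critical pod, capped at 4m0s apiece, until its Ready condition reports True. A hedged client-go sketch of the same check; the kubeconfig path, namespace, pod name, and poll interval are placeholders taken from the surrounding log, not minikube's code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, the
    // same status the log prints as has status "Ready":"True".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s cap in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-7db6d8ff4d-tr5z2", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }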
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
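Each of the 71766 lines above is one pass of a roughly half-second retry loop running sudo pgrep -xnf kube-apiserver.*minikube.* over SSH until an apiserver process shows up. A local (non-SSH) Go sketch of that loop; the pattern and cadence come from the log, while the one-minute deadline is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        pattern := "kube-apiserver.*minikube.*"
        deadline := time.Now().Add(1 * time.Minute) // illustrative cap
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond) // the log shows ~0.5s between attempts
        }
        fmt.Println("no kube-apiserver process within deadline")
    }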
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
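The block above is the diagnostic sweep that runs once the apiserver stops answering: for each control-plane component, list containers in any state with sudo crictl ps -a --quiet --name=<component>, and report when none match. A Go sketch of that sweep, under the assumption that crictl is on PATH and runs locally rather than through ssh_runner.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            if ids := strings.TrimSpace(string(out)); ids == "" {
                // Matches the repeated `found id: ""` / "No container was
                // found" pairs in the log.
                fmt.Printf("no container found matching %q\n", name)
            } else {
                fmt.Printf("%s: %s\n", name, ids)
            }
        }
    }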
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
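When the sweep finds no containers, each cycle falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output, tolerating individual failures: describe nodes exits 1 because nothing listens on localhost:8443, and the loop records that and moves on. A best-effort Go sketch of that gather; the commands are copied verbatim from the log, while the error handling is an assumption about the intent rather than minikube's exact logic.

    package main

    import (
        "fmt"
        "os/exec"
    )

    type gather struct{ name, cmd string }

    func main() {
        // Commands copied from the "Gathering logs for ..." lines above.
        gathers := []gather{
            {"kubelet", `sudo journalctl -u kubelet -n 400`},
            {"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
            {"describe nodes", `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
            {"CRI-O", `sudo journalctl -u crio -n 400`},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gathers {
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            if err != nil {
                // Best effort: keep going when one collector fails, as the
                // log does after the refused connection on localhost:8443.
                fmt.Printf("failed to gather %s: %v\n", g.name, err)
                continue
            }
            fmt.Printf("gathered %s: %d bytes\n", g.name, len(out))
        }
    }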
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
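The cycle that just completed (cri.go listing, then ssh_runner.go running crictl) is minikube probing for each control-plane container by name; every probe returns an empty ID list because nothing is running on this node yet. A minimal flush-left sketch of that kind of probe, assuming crictl is installed and using the same component names the log checks:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints, one per line (an empty slice when nothing matches).
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The components the log above probes for, in the same order.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}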
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
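The pod_ready.go lines interleaved here come from the other test processes (pids 71396, 72069, 71227), each polling a metrics-server pod that never reports Ready. A hedged sketch of that style of readiness check using client-go; the namespace, pod name, and kubeconfig path are taken from the log, while the poll interval and error handling are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.TODO(), "metrics-server-569cc877fc-k68zp", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}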
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
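Every describe-nodes attempt above fails the same way: the bundled kubectl cannot reach the apiserver on localhost:8443, which is consistent with the empty kube-apiserver probes, since no apiserver container exists to accept the connection. A quick sketch that reproduces just the connectivity half of that failure:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The address kubectl is refused on in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver running, this prints a "connection refused"
		// error, matching the kubectl stderr in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}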
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
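The pgrep timestamps (00:52:00.371, 00:52:03.407, 00:52:06.445, 00:52:09.476, ...) show the whole probe-and-gather cycle repeating at roughly three-second intervals while minikube waits for a kube-apiserver process to appear. A minimal sketch of such a wait loop; the interval is inferred from the timestamps, not taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for {
		// Same check as the log: is a kube-apiserver process running?
		// pgrep exits nonzero when nothing matches.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// No process yet; gather diagnostics here, then retry.
		time.Sleep(3 * time.Second)
	}
}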
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
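With every container probe coming back empty, each cycle falls back to host-level sources: the kubelet and CRI-O units via journalctl, the kernel ring buffer via dmesg, and a crictl/docker process listing. A sketch that runs the same gathering commands over a local shell; on the real cluster these are executed through minikube's SSH runner instead:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same log sources the cycle above gathers, keyed by label.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for label, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s ==\n%s", label, out)
		if err != nil {
			fmt.Printf("(%s exited with: %v)\n", label, err)
		}
	}
}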
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
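The "connection refused" on localhost:8443 above is consistent with the empty crictl listings in the same cycle: no kube-apiserver container exists, so nothing is serving on the apiserver port inside the guest. A minimal sketch of confirming that from Go, assuming the host/port from the error text (this is not part of minikube, just an illustrative probe):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the port kubectl was refused on; host and port are assumptions
    	// copied from the error text above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Matches the log: "connection refused" means no apiserver is up yet.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8443")
    }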
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
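Each cycle above walks the same component list (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and gets an empty ID list back every time. A minimal sketch of that probe, assuming local crictl access instead of minikube's SSH runner; the command string is copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the probe in the log: `crictl ps -a --quiet --name=<name>`
    // prints one container ID per line, so empty output means "0 containers".
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainers(name)
    		if err != nil {
    			fmt.Println(name, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }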
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
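Interleaved with the 71766 loop, three other test runs (71396, 72069, 71227) keep logging metrics-server pods whose Ready condition is "False". A minimal client-go sketch of that kind of readiness check, assuming a reachable kubeconfig; the namespace and pod name are taken from the log lines, and the kubeconfig path is a placeholder:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True, the value
    // the pod_ready.go lines above keep finding to be "False".
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ok, err := podReady(cs, "kube-system", "metrics-server-78fcd8795b-k5q49")
    	fmt.Println("Ready:", ok, "err:", err)
    }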
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
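Every cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly three seconds after the previous one finished gathering logs, i.e. a poll-until-healthy loop. A minimal sketch of that shape; the interval matches the spacing of the timestamps above, while the overall timeout is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s gap between cycles
    	}
    	fmt.Println("gave up waiting for kube-apiserver")
    }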
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
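Once a cycle confirms no control-plane containers, it gathers the same log sources (kubelet, dmesg, describe nodes, CRI-O, container status), each backed by one shell command. A minimal sketch of that fan-out with the command strings copied from the log; "describe nodes" is omitted here because it depends on the guest's kubectl binary and kubeconfig path:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Source names and commands are taken from the "Gathering logs for ..." lines.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"CRI-O":            "sudo journalctl -u crio -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    	}
    }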
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
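	Process 71766 first probes for a live apiserver process with `pgrep -xnf kube-apiserver.*minikube.*`, then asks the CRI runtime for containers matching each expected component; every `crictl ps -a --quiet --name=<component>` above returns an empty ID list, so no control-plane container exists at all. A sketch equivalent to those cri.go calls, run locally with os/exec instead of minikube's SSH runner:

// Sketch of the cri.go-style query shown above: `crictl ps -a --quiet
// --name=<name>` prints one container ID per line; empty output means
// no matching container (the "0 containers: []" case in this log).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listCRIContainers(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}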
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
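	The `localhost:8443` connection-refused error is the expected companion of the empty crictl listings: kubectl cannot reach an apiserver that has no running container. A quick, hypothetical probe (not part of minikube) that separates "nothing listening on the port" from slower network failures:

// Hypothetical probe: dial the apiserver endpoint directly and classify
// the failure mode seen in the kubectl error above.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		// Matches "The connection to the server localhost:8443 was refused":
		// nothing is listening, i.e. the apiserver is not running.
		fmt.Println("connection refused: apiserver down")
		return
	}
	fmt.Printf("dial failed: %v\n", err)
}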
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
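	With no containers to inspect, logs.go falls back to host-level sources; the exact commands are visible in the Run: lines above. Transcribed as a table (an illustration of what is gathered, not minikube's actual logs.go structure):

package diag

// Log sources and the shell command gathered for each, copied from the
// Run: lines in this log.
var logSources = map[string]string{
	// last 400 kubelet journal entries
	"kubelet": `sudo journalctl -u kubelet -n 400`,
	// kernel ring buffer, warnings and above only (util-linux flags:
	// -P no pager, -H human-readable, -L=never no color, --level filter)
	"dmesg": `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	// uses the versioned kubectl shipped with the cluster under test
	"describe nodes": `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
	"CRI-O":          `sudo journalctl -u crio -n 400`,
	// `which crictl || echo crictl` tolerates crictl missing from PATH;
	// the trailing `|| sudo docker ps -a` falls back to the Docker runtime
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}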
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
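	Taken together, the 71766 entries show one scan-and-gather cycle repeating roughly every three seconds (00:52:58, 00:53:01, 00:53:04, ..., 00:53:10), the classic shape of a bounded retry loop waiting for the apiserver to come back. A minimal sketch of that cadence (the 3s interval is read off the log's timestamps; the deadline here is a placeholder, not a minikube constant):

// Sketch of the retry cadence visible above: re-run a check about every
// 3s until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"time"
)

func waitFor(check func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("condition not met within %v", timeout)
}

func main() {
	err := waitFor(func() bool {
		// stand-in for "is kube-apiserver running?" (the pgrep + crictl
		// scan performed at the top of each cycle above)
		return false
	}, 3*time.Second, 12*time.Second)
	fmt.Println(err)
}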
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
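
The block above is one full pass of process 71766's control-plane probe: it queries the CRI once per expected component, and every query returns an empty ID list, hence the string of "No container was found" warnings. The same enumeration can be reproduced from a shell on the node with the exact crictl invocation the log shows:

    # One query per component, as in the cycle above. --quiet prints only
    # container IDs, so empty output means the component is not running.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done
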
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
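
The describe nodes step fails for the same reason the probes above come back empty: with no kube-apiserver container running, kubectl's request to localhost:8443 is refused. Two quick checks that would confirm this on the guest (the port and the crictl command come from the log; assuming curl is present on the node is a guess, not something the log shows):

    # Empty output: the apiserver container does not exist at all.
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Refused while the apiserver is down; /healthz answers once it is up.
    # -k skips TLS verification for a quick probe.
    curl -sk https://localhost:8443/healthz
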
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
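
Each gathering cycle ends with the container status step, whose Run: line uses a shell fallback: substitute crictl's full path if which finds it, otherwise the bare name, and if that command still fails, fall back to docker. The same one-liner spelled out, with $( ) in place of the log's backticks:

    # Prefer crictl (full path if found, bare name otherwise); if that
    # command fails, fall back to listing containers with docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
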
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
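
Each probe cycle above starts with `sudo pgrep -xnf kube-apiserver.*minikube.*` and then asks the container runtime for every expected control-plane container; `crictl ps -a --quiet --name=<component>` prints only container IDs, so empty output is what logs.go reports as "0 containers". A hypothetical stdlib-only sketch of that enumeration (component names copied from the log lines above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// -a includes exited containers; --quiet prints one container ID per line.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl ps failed for %q: %v\n", name, err)
    			continue
    		}
    		if ids := strings.Fields(string(out)); len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    		} else {
    			fmt.Printf("found %d %q container(s): %v\n", len(ids), name, ids)
    		}
    	}
    }
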
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
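
With no containers found, the cycle falls back to host-level sources, shelling out through `/bin/bash -c` exactly as logged. A hypothetical sketch that replays those four gathering commands verbatim (the pipelines are copied from the log; the surrounding loop is illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	steps := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range steps {
    		// Each source is a fixed shell pipeline, run through /bin/bash -c as in the log.
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s logs failed: %v\n", s.name, err)
    			continue
    		}
    		fmt.Printf("== %s ==\n%s\n", s.name, out)
    	}
    }
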
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
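
The pod_ready lines poll each metrics-server pod's Ready condition every couple of seconds and log `"Ready":"False"` until it flips. A hypothetical client-go sketch of a single such check (pod name, namespace, and kubeconfig path are copied from the log; minikube's actual pod_ready.go is not reproduced here):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := client.CoreV1().Pods("kube-system").Get(
    		context.Background(), "metrics-server-569cc877fc-dm7k7", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %q Ready=%v\n", pod.Name, isPodReady(pod))
    }
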
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
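Each gather cycle above runs the same fixed battery of host commands over SSH; describe nodes keeps failing because nothing answers on localhost:8443 yet. A sketch of the same diagnostics issued by hand on the node, assembled from the Run: lines above:

	# kubelet and CRI-O service logs, plus recent kernel warnings:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# empty output here means no control-plane containers exist yet:
	sudo crictl ps -a --quiet --name=kube-apiserver
	# fails with "connection refused" until the apiserver is back on :8443:
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig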
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
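After restartPrimaryControlPlane gives up (4m4.5s here), the fallback is a full reset followed by a fresh init. The reset, exactly as invoked above, pins PATH to the bundled kubeadm and names the CRI-O socket explicitly:

	# Fallback reset as logged (v1.20.0 binary bundle, CRI-O runtime):
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force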
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
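The grep/rm sequence above is a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so the subsequent init regenerates it. Condensed into one loop, using the same four files and endpoint as in the log:

	# Remove kubeconfigs that do not point at the expected control-plane endpoint:
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done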
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
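From this point the control plane runs as static pods written to /etc/kubernetes/manifests and picked up by the kubelet. A sketch for watching the bring-up on the node; the curl is an assumption, since /healthz may require credentials depending on the cluster's anonymous-auth settings:

	# Static-pod manifests kubeadm just wrote:
	ls /etc/kubernetes/manifests
	# The apiserver container should appear once the kubelet launches it:
	sudo crictl ps --name kube-apiserver
	# A healthy apiserver answers "ok" on its health endpoint:
	curl -sk https://localhost:8443/healthz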
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
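Unlike the earlier empty cycles, this cluster has live containers, so each gather step resolves a container ID with crictl ps and then tails that container's logs. The two-step pattern written out by hand, as a manual equivalent of the Run: lines above:

	# Resolve the newest matching container, then tail its logs:
	id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	[ -n "$id" ] && sudo /usr/bin/crictl logs --tail 400 "$id"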
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
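The join commands above embed a CA-pinning hash. It can be re-derived from the cluster CA at any time; this is the usual recipe from the kubeadm documentation, pointed at the certificateDir this run logged (/var/lib/minikube/certs):

	# Recompute --discovery-token-ca-cert-hash from the CA public key:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | sha256sum | awk '{print "sha256:"$1}'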
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
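For the "kvm2" driver + "crio" runtime combination, minikube selects the bridge CNI. Below is an illustrative conflist in the standard CNI schema; the file name, field values, and the 10.244.0.0/16 pod subnet are assumptions for illustration, not values read from this log:

	# Hypothetical bridge CNI config (standard conflist schema, values assumed):
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF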
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
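
Each gathering pass above is the same two-step CRI pattern: resolve container IDs with 'crictl ps -a --quiet --name=<component>', then tail each container with 'crictl logs --tail 400 <id>'. A minimal standalone sketch of that loop (assumes crictl on PATH and sudo rights; the helper names are illustrative, not minikube's internals):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs resolves all container IDs (any state) whose name matches filter,
    // mirroring the logged `sudo crictl ps -a --quiet --name=...` invocations.
    func containerIDs(filter string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs prints the last n log lines of one container.
    func tailLogs(id string, n int) error {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        fmt.Printf("==> %s <==\n%s", id, out)
        return err
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            ids, err := containerIDs(name)
            if err != nil {
                fmt.Println("list", name, "failed:", err)
                continue
            }
            for _, id := range ids {
                _ = tailLogs(id, 400)
            }
        }
    }
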
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
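
The healthz wait polls https://<ip>:8443/healthz until it returns 200 with body 'ok', then records the duration metric. A rough equivalent, assuming a throwaway client that skips certificate verification (minikube itself trusts the cluster CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it answers 200/ok
    // or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Assumption: skip cert verification; real callers should pin the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.72.32:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
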
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
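
The block above is the post-init privilege bootstrap: scp the CNI config, bind cluster-admin to kube-system:default (the 'minikube-rbac' clusterrolebinding), label the node, then poll 'kubectl get sa default' on a roughly 500ms cadence until the default service account exists. A sketch of the bind-and-poll half (paths taken from the log lines; error handling simplified):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // kubectl runs the pinned kubectl binary against the node-local kubeconfig,
    // mirroring the `sudo /var/lib/minikube/binaries/.../kubectl ... --kubeconfig=...`
    // invocations above.
    func kubectl(args ...string) error {
        base := []string{"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"}
        base = append(base, args...)
        base = append(base, "--kubeconfig=/var/lib/minikube/kubeconfig")
        return exec.Command("sudo", base...).Run()
    }

    func main() {
        // One-shot: grant cluster-admin to kube-system:default so addons can run.
        if err := kubectl("create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default"); err != nil {
            fmt.Println("rbac:", err)
        }
        // Poll: the default service account appears asynchronously after kubeadm init.
        for start := time.Now(); time.Since(start) < 2*time.Minute; time.Sleep(500 * time.Millisecond) {
            if kubectl("get", "sa", "default") == nil {
                fmt.Println("default service account is ready")
                return
            }
        }
        fmt.Println("timed out waiting for the default service account")
    }
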
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
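
The 'Done!' line is printed only after the version-skew check on the line before it: the host kubectl minor version is compared with the cluster's, and a large skew draws a warning (compare 'minor skew: 0' here with 'minor skew: 1' for the v1.31.0-beta.0 cluster below; kubectl's support policy is one minor of skew). A small sketch of that comparison:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch[-pre]" version string.
    func minor(v string) (int, error) {
        parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
        if len(parts) < 2 {
            return 0, fmt.Errorf("malformed version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    func main() {
        client, cluster := "1.30.3", "1.31.0-beta.0" // the pairing from the no-preload run
        cm, _ := minor(client)
        sm, _ := minor(cluster)
        skew := sm - cm
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
        if skew > 1 {
            fmt.Println("warning: kubectl only supports one minor of skew against the apiserver")
        }
    }
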
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
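
Each 'new ssh client' line corresponds to a fresh SSH connection into the guest using the per-machine private key and the 'docker' user, with the host, port, and key path pulled from the driver. A minimal reconstruction with golang.org/x/crypto/ssh (host-key pinning is omitted here as an assumption; sshutil's actual policy may differ):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dial opens an SSH connection matching the &{IP Port SSHKeyPath Username}
    // tuples logged above.
    func dial(ip, port, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Assumption: no pinned host key, as with a freshly created VM.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", ip+":"+port, cfg)
    }

    func main() {
        client, err := dial("192.168.50.251", "22",
            os.ExpandEnv("$HOME/.minikube/machines/no-preload-945581/id_rsa"), "docker")
        if err != nil {
            fmt.Println("ssh:", err)
            return
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            fmt.Println("session:", err)
            return
        }
        defer session.Close()
        out, _ := session.CombinedOutput("uname -a")
        fmt.Printf("%s", out)
    }
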
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
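
Addon installation is always scp-then-apply: the YAML is copied under /etc/kubernetes/addons/ on the node, then applied with the pinned kubectl and the node-local kubeconfig, exactly as the 'sudo KUBECONFIG=... kubectl apply -f ...' lines show. A condensed sketch of the pattern (the StorageClass body here is a stand-in, not the manifest minikube ships):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddon writes the manifest where the addon machinery expects it, then
    // applies it with the pinned kubectl; both steps run locally for illustration,
    // where minikube performs them over SSH.
    func applyAddon(path string, manifest []byte) error {
        if err := os.WriteFile(path, manifest, 0o644); err != nil {
            return err
        }
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", "apply", "-f", path)
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        return err
    }

    func main() {
        // Hypothetical minimal manifest for the default-storageclass addon.
        sc := []byte(`apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
    provisioner: k8s.io/minikube-hostpath
    `)
        if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml", sc); err != nil {
            fmt.Println("apply failed:", err)
        }
    }
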
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
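
The pod_ready waits above check each system-critical pod's Ready condition and record a per-pod duration metric. The same check can be approximated with a kubectl jsonpath query instead of minikube's watch-based helper (a deliberate simplification):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(ns, name string) bool {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name, "-o",
            `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for _, pod := range []string{"etcd-no-preload-945581", "kube-apiserver-no-preload-945581"} {
            for !podReady("kube-system", pod) {
                if time.Now().After(deadline) {
                    fmt.Println("timed out waiting for", pod)
                    return
                }
                time.Sleep(2 * time.Second)
            }
            fmt.Println(pod, "is Ready")
        }
    }

Note that a pod stuck Pending with ContainersNotReady, like the metrics-server pods in this run, never passes this check, which is what drives the 4m0s WaitExtra timeout seen later.
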
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
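
The repeating [kubelet-check] failures are kubeadm probing the kubelet's local healthz endpoint on port 10248 (the 'curl -sSL http://localhost:10248/healthz' it quotes); 'connection refused' means nothing is listening, i.e. the kubelet has not come up on that node yet. The probe itself is trivial to reproduce:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // Probe the kubelet's healthz endpoint a few times, the way kubeadm's
    // kubelet-check does during init.
    func main() {
        client := &http.Client{Timeout: time.Second}
        for i := 0; i < 5; i++ {
            resp, err := client.Get("http://localhost:10248/healthz")
            if err != nil {
                fmt.Println("kubelet not healthy yet:", err)
                time.Sleep(5 * time.Second)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("kubelet is healthy")
                return
            }
        }
        fmt.Println("giving up; check `journalctl -u kubelet`")
    }
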
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
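The grep-and-remove sequence above (kubeadm.go:163) is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already mentions the expected API-server endpoint; otherwise it is deleted so that `kubeadm init` can regenerate it. A minimal standalone sketch of that logic in Go follows (the sudo/grep/rm invocations mirror the log; the function name and wiring are illustrative, not minikube's actual API):

package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleConfigs mirrors the log above: for each kubeconfig, grep for
// the expected endpoint; if grep exits non-zero the file is stale (or does
// not exist) and is removed so `kubeadm init` can rewrite it.
func cleanupStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 when the pattern is missing and 2 when the file is
		// absent; either way the config cannot be reused, so both cases
		// flow through the same removal path, as in the log above.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8444")
}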
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
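The 496-byte `1-k8s.conflist` written above configures the bridge CNI that the preceding "Configuring bridge CNI" message announces. The log shows only the file's size, not its contents, so the conflist below is just a generic bridge-plus-portmap example of roughly the shape such a file takes; the subnet, names, and flag values are assumptions for illustration:

package main

import "os"

// bridgeConflist is an illustrative bridge+portmap CNI config of the kind
// written to /etc/cni/net.d/1-k8s.conflist; the exact payload in the log
// is not shown, so these values are assumed.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Writing under /etc requires root; in the log this happens over SSH
	// with sudo rather than locally.
	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}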
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
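The burst of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait loop: minikube polls roughly every 500ms until the `default` ServiceAccount exists, which confirms the API server is serving requests after the cluster-admin binding was created. A sketch of the same poll (the binary and kubeconfig paths come from the log; the wrapper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the ~500ms cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // default service account exists; API server is up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}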
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
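Each `new ssh client` line above carries the same connection tuple (IP 192.168.61.97, port 22, the machine's id_rsa key, user `docker`); one connection is opened per parallel addon installer. A hedged sketch of opening such a key-authenticated connection with golang.org/x/crypto/ssh, which is not necessarily the package minikube's sshutil uses (host-key verification is skipped here purely for brevity):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dial opens a key-authenticated SSH connection like the ones the log
// creates for each addon installer.
func dial(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable only against a throwaway test VM, as here.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dial("192.168.61.97:22", "docker",
		"/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}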
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
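The healthz probe above is a plain HTTPS GET against the API server that must return status 200 with the literal body `ok`, exactly as the two log lines show. A minimal sketch of that check (this version skips certificate verification, whereas minikube trusts the CA it generated for the cluster):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the probe from the log: GET /healthz, expect 200 "ok".
func checkHealthz(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip verification; minikube pins its own CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.61.97:8444"))
}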
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
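The three retries at 00:56:28-29 implement a simple readiness diff: list the kube-system pods, compute which required components (kube-dns, kube-proxy, ...) still lack a Running pod, and back off with a randomized delay until the set is empty. A toy sketch of just the diff step over a name-to-phase map (the snapshot data is hypothetical; minikube reads real pods from the API server and matches them by label):

package main

import "fmt"

// missingComponents returns the required components with no Running pod,
// matching the "missing components: ..." retry lines in the log.
func missingComponents(phases map[string]string, required []string) []string {
	var missing []string
	for _, comp := range required {
		if phases[comp] != "Running" {
			missing = append(missing, comp)
		}
	}
	return missing
}

func main() {
	// Hypothetical snapshot shaped like the 00:56:28.632 retry above.
	phases := map[string]string{
		"kube-dns":   "Pending",
		"kube-proxy": "Pending",
		"etcd":       "Running",
	}
	required := []string{"kube-dns", "kube-proxy", "etcd"}
	fmt.Println("missing components:", missingComponents(phases, required))
}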
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
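The closing `kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)` line is a client/server version-skew check: kubectl supports one minor version of skew in either direction, so a larger skew would draw a warning here. A sketch of computing that skew from two `major.minor.patch` strings (the helper is illustrative, not minikube's parser):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of
// two "major.minor.patch" strings, as reported in the log's final line.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.30.3", "1.30.3")
	fmt.Printf("minor skew: %d\n", skew) // 0, so no warning is printed
}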
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
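	The triage the kubeadm output above recommends can be run by hand on the node before the retry that follows; a minimal sketch, using the CRI-O socket path and the kubelet health port (10248) exactly as they appear in the log (CONTAINERID is the log's own placeholder):

	    # Is the kubelet up and answering the health endpoint kubeadm's kubelet-check probes?
	    systemctl status kubelet
	    curl -sS http://localhost:10248/healthz
	    journalctl -xeu kubelet | tail -n 50
	    # List control-plane containers and inspect a failing one (commands copied from the log):
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID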
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
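	The grep-then-rm sequence above is minikube checking whether each leftover kubeconfig still points at the expected control-plane endpoint; files that do not (or that no longer exist) are removed before the retry. A rough shell equivalent, assuming the endpoint and paths from the log (a sketch of the kubeadm.go:163 logic, not minikube's actual code):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
	        sudo rm -f "/etc/kubernetes/$f"   # stale or missing: clear it before retrying kubeadm init
	      fi
	    done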
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
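	After the failed start, minikube probes CRI-O once per control-plane component name; every probe above returns an empty id list, confirming no component container was ever created. The scan can be reproduced with the same crictl call the log shows (a sketch; the component names are taken from the log lines above):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done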
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
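	The five "Gathering logs" runs above are the diagnostics bundle minikube collects on failure. The same commands can be replayed on the node (copied from the log, with the `which crictl` wrapper slightly simplified); note that while the apiserver is down, only the describe-nodes step will fail, as it does above:

	    sudo crictl ps -a || sudo docker ps -a
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400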
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output in the "Error starting cluster" block above; verbatim duplicate omitted]
	
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output in the "Error starting cluster" block above; verbatim duplicate omitted]
	
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 
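	The suggestion above maps directly to a start flag; a hypothetical invocation (the flag is copied from the suggestion, while <profile> is a placeholder for whichever profile hit this error):

	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd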
	
	
	==> CRI-O <==
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.723173336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610331723115669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=774c3905-54ed-4b6c-befc-8d05251736ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.723740409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca82d909-a5d5-403e-9473-a95f018c4a59 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.723795834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca82d909-a5d5-403e-9473-a95f018c4a59 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.723984372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25,PodSandboxId:e735873e2db9aadf917b033cb16d5d4bf65b383f8345aee7343df38e2c0d7983,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788829841455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-phh59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f48ef56-5d78-4a1b-b53b-b99a03114323,},Annotations:map[string]string{io.kubernetes.container.hash: 128b519c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8,PodSandboxId:0bfd753b52d3820f0917b2b351f850b9538fbb04cb783c4b9f3a2702375ad623,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788774554385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gv5m,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6db8dadd-0345-4eef-a024-bdaf97146e30,},Annotations:map[string]string{io.kubernetes.container.hash: da029baa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8,PodSandboxId:5e4532dd14faac1d844244fc146516c8fd9c48f9404c64d739d94a2cf6a0a99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721609788605453703,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b4fe1-79af-497d-8119-7ad60547fefe,},Annotations:map[string]string{io.kubernetes.container.hash: 9443e13c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0,PodSandboxId:a38669a4c258ccf5eb4b22ed68a9cb59f22a7e825fe86ab756d9b91e12a5f6cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721609788476838206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f938f331-504a-40f0-8b44-4b23cd07a93e,},Annotations:map[string]string{io.kubernetes.container.hash: 315d3f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79,PodSandboxId:23b1ce2239dba9552d864647bf5bf029908ed7bc419d4733edd7f20c3f28afc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609768751498094
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 406178438c6ef73e2da4b188e37d6794,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b,PodSandboxId:e44bbad7456e9a0c70662b96e7b87afc623660702022fc81ed97718bdb6e4dab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609768750272556,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb48ff9ac06aa69ffbd43f050240766,},Annotations:map[string]string{io.kubernetes.container.hash: 3dc05171,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3,PodSandboxId:d2efa81aad4207c21b93671c74f8edf528927fa59e7df6d981029fb9d6afe7ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609768684958347,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 485f3955bd335159a10fad46278afdb7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228,PodSandboxId:82bf5b759b253058214c5d64f46b6b2a250a839f0401e910a14adfc38f056838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609768667641352,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7faf4ae4ab0a5aa089d38a53b3f4f063,},Annotations:map[string]string{io.kubernetes.container.hash: bef12e1b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca82d909-a5d5-403e-9473-a95f018c4a59 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.761288573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3449fb37-9e94-46ba-a45a-56abd54d4a84 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.761363401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3449fb37-9e94-46ba-a45a-56abd54d4a84 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.763341538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb9f164e-6a9a-4a43-bae6-4ce1856a8963 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.763839952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610331763814294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb9f164e-6a9a-4a43-bae6-4ce1856a8963 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.764576178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=420c6e2a-4a09-410a-9dcb-0fb3a7dd9040 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.764647291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=420c6e2a-4a09-410a-9dcb-0fb3a7dd9040 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.764828534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25,PodSandboxId:e735873e2db9aadf917b033cb16d5d4bf65b383f8345aee7343df38e2c0d7983,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788829841455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-phh59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f48ef56-5d78-4a1b-b53b-b99a03114323,},Annotations:map[string]string{io.kubernetes.container.hash: 128b519c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8,PodSandboxId:0bfd753b52d3820f0917b2b351f850b9538fbb04cb783c4b9f3a2702375ad623,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788774554385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gv5m,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6db8dadd-0345-4eef-a024-bdaf97146e30,},Annotations:map[string]string{io.kubernetes.container.hash: da029baa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8,PodSandboxId:5e4532dd14faac1d844244fc146516c8fd9c48f9404c64d739d94a2cf6a0a99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721609788605453703,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b4fe1-79af-497d-8119-7ad60547fefe,},Annotations:map[string]string{io.kubernetes.container.hash: 9443e13c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0,PodSandboxId:a38669a4c258ccf5eb4b22ed68a9cb59f22a7e825fe86ab756d9b91e12a5f6cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721609788476838206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f938f331-504a-40f0-8b44-4b23cd07a93e,},Annotations:map[string]string{io.kubernetes.container.hash: 315d3f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79,PodSandboxId:23b1ce2239dba9552d864647bf5bf029908ed7bc419d4733edd7f20c3f28afc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609768751498094
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 406178438c6ef73e2da4b188e37d6794,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b,PodSandboxId:e44bbad7456e9a0c70662b96e7b87afc623660702022fc81ed97718bdb6e4dab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609768750272556,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb48ff9ac06aa69ffbd43f050240766,},Annotations:map[string]string{io.kubernetes.container.hash: 3dc05171,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3,PodSandboxId:d2efa81aad4207c21b93671c74f8edf528927fa59e7df6d981029fb9d6afe7ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609768684958347,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 485f3955bd335159a10fad46278afdb7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228,PodSandboxId:82bf5b759b253058214c5d64f46b6b2a250a839f0401e910a14adfc38f056838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609768667641352,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7faf4ae4ab0a5aa089d38a53b3f4f063,},Annotations:map[string]string{io.kubernetes.container.hash: bef12e1b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=420c6e2a-4a09-410a-9dcb-0fb3a7dd9040 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.800644753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0854e569-cb9a-4419-ab03-fe40ed10d1ef name=/runtime.v1.RuntimeService/Version
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.800735364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0854e569-cb9a-4419-ab03-fe40ed10d1ef name=/runtime.v1.RuntimeService/Version
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.802148560Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33c03d8c-9559-4ea7-a385-65107e274659 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.802560840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610331802540674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33c03d8c-9559-4ea7-a385-65107e274659 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.803190680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4da3e167-62a4-4700-a2c7-9d43b6906392 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.803320078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4da3e167-62a4-4700-a2c7-9d43b6906392 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.804235441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25,PodSandboxId:e735873e2db9aadf917b033cb16d5d4bf65b383f8345aee7343df38e2c0d7983,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788829841455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-phh59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f48ef56-5d78-4a1b-b53b-b99a03114323,},Annotations:map[string]string{io.kubernetes.container.hash: 128b519c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8,PodSandboxId:0bfd753b52d3820f0917b2b351f850b9538fbb04cb783c4b9f3a2702375ad623,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788774554385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gv5m,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6db8dadd-0345-4eef-a024-bdaf97146e30,},Annotations:map[string]string{io.kubernetes.container.hash: da029baa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8,PodSandboxId:5e4532dd14faac1d844244fc146516c8fd9c48f9404c64d739d94a2cf6a0a99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721609788605453703,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b4fe1-79af-497d-8119-7ad60547fefe,},Annotations:map[string]string{io.kubernetes.container.hash: 9443e13c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0,PodSandboxId:a38669a4c258ccf5eb4b22ed68a9cb59f22a7e825fe86ab756d9b91e12a5f6cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721609788476838206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f938f331-504a-40f0-8b44-4b23cd07a93e,},Annotations:map[string]string{io.kubernetes.container.hash: 315d3f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79,PodSandboxId:23b1ce2239dba9552d864647bf5bf029908ed7bc419d4733edd7f20c3f28afc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609768751498094
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 406178438c6ef73e2da4b188e37d6794,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b,PodSandboxId:e44bbad7456e9a0c70662b96e7b87afc623660702022fc81ed97718bdb6e4dab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609768750272556,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb48ff9ac06aa69ffbd43f050240766,},Annotations:map[string]string{io.kubernetes.container.hash: 3dc05171,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3,PodSandboxId:d2efa81aad4207c21b93671c74f8edf528927fa59e7df6d981029fb9d6afe7ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609768684958347,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 485f3955bd335159a10fad46278afdb7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228,PodSandboxId:82bf5b759b253058214c5d64f46b6b2a250a839f0401e910a14adfc38f056838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609768667641352,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7faf4ae4ab0a5aa089d38a53b3f4f063,},Annotations:map[string]string{io.kubernetes.container.hash: bef12e1b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4da3e167-62a4-4700-a2c7-9d43b6906392 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.838067894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9cc1acf-8c30-4c90-aaca-41b82d1c3c55 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.838179191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9cc1acf-8c30-4c90-aaca-41b82d1c3c55 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.839447030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d237f184-286b-4760-85ae-bfbfdb134880 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.839832465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610331839808316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d237f184-286b-4760-85ae-bfbfdb134880 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.840468577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57eed563-4d45-48f7-82e8-b52a25663561 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.840521284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57eed563-4d45-48f7-82e8-b52a25663561 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:05:31 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:05:31.840704911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25,PodSandboxId:e735873e2db9aadf917b033cb16d5d4bf65b383f8345aee7343df38e2c0d7983,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788829841455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-phh59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f48ef56-5d78-4a1b-b53b-b99a03114323,},Annotations:map[string]string{io.kubernetes.container.hash: 128b519c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8,PodSandboxId:0bfd753b52d3820f0917b2b351f850b9538fbb04cb783c4b9f3a2702375ad623,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788774554385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gv5m,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6db8dadd-0345-4eef-a024-bdaf97146e30,},Annotations:map[string]string{io.kubernetes.container.hash: da029baa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8,PodSandboxId:5e4532dd14faac1d844244fc146516c8fd9c48f9404c64d739d94a2cf6a0a99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721609788605453703,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b4fe1-79af-497d-8119-7ad60547fefe,},Annotations:map[string]string{io.kubernetes.container.hash: 9443e13c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0,PodSandboxId:a38669a4c258ccf5eb4b22ed68a9cb59f22a7e825fe86ab756d9b91e12a5f6cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721609788476838206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f938f331-504a-40f0-8b44-4b23cd07a93e,},Annotations:map[string]string{io.kubernetes.container.hash: 315d3f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79,PodSandboxId:23b1ce2239dba9552d864647bf5bf029908ed7bc419d4733edd7f20c3f28afc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609768751498094
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 406178438c6ef73e2da4b188e37d6794,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b,PodSandboxId:e44bbad7456e9a0c70662b96e7b87afc623660702022fc81ed97718bdb6e4dab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609768750272556,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb48ff9ac06aa69ffbd43f050240766,},Annotations:map[string]string{io.kubernetes.container.hash: 3dc05171,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3,PodSandboxId:d2efa81aad4207c21b93671c74f8edf528927fa59e7df6d981029fb9d6afe7ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609768684958347,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 485f3955bd335159a10fad46278afdb7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228,PodSandboxId:82bf5b759b253058214c5d64f46b6b2a250a839f0401e910a14adfc38f056838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609768667641352,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7faf4ae4ab0a5aa089d38a53b3f4f063,},Annotations:map[string]string{io.kubernetes.container.hash: bef12e1b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57eed563-4d45-48f7-82e8-b52a25663561 name=/runtime.v1.RuntimeService/ListContainers
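
	The three ListContainers dumps above are minikube's log collector polling the CRI-O socket once per diagnostic pass; the container list itself is unchanged between them. The same listing can be reproduced by hand with crictl (a sketch, assuming the default CRI-O socket path recorded in the node's cri-socket annotation below):

	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect a1e5f8f01efbd   # any ID prefix from the list above works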
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1e5f8f01efbd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   e735873e2db9a       coredns-7db6d8ff4d-phh59
	d6f0c65dbc052       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0bfd753b52d38       coredns-7db6d8ff4d-4gv5m
	e30b46dc67de8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5e4532dd14faa       storage-provisioner
	5e711aaab81f4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   a38669a4c258c       kube-proxy-th55d
	f1bbb980156be       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   23b1ce2239dba       kube-scheduler-default-k8s-diff-port-214905
	8932ca8211a6f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   e44bbad7456e9       etcd-default-k8s-diff-port-214905
	7a7d6a0fb3fa2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   d2efa81aad420       kube-controller-manager-default-k8s-diff-port-214905
	c698a466ba3cb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   82bf5b759b253       kube-apiserver-default-k8s-diff-port-214905
	
	
	==> coredns [a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
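
	Both CoreDNS replicas report the same configuration SHA512, so they loaded an identical Corefile. These snippets are ordinary pod logs and can be pulled directly (a sketch, assuming the profile-named kubeconfig context minikube creates):

	  $ kubectl --context default-k8s-diff-port-214905 -n kube-system logs coredns-7db6d8ff4d-phh59
	  $ kubectl --context default-k8s-diff-port-214905 -n kube-system logs coredns-7db6d8ff4d-4gv5m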
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-214905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-214905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=default-k8s-diff-port-214905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:56:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-214905
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 01:05:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 01:01:40 +0000   Mon, 22 Jul 2024 00:56:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 01:01:40 +0000   Mon, 22 Jul 2024 00:56:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 01:01:40 +0000   Mon, 22 Jul 2024 00:56:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 01:01:40 +0000   Mon, 22 Jul 2024 00:56:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.97
	  Hostname:    default-k8s-diff-port-214905
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb045cc0de4f4a91b8f64fe03eb3641b
	  System UUID:                fb045cc0-de4f-4a91-b8f6-4fe03eb3641b
	  Boot ID:                    07d950fa-0a86-4eb0-81fa-058c796af7b9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4gv5m                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-7db6d8ff4d-phh59                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-default-k8s-diff-port-214905                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-214905             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-214905    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-th55d                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-default-k8s-diff-port-214905             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-569cc877fc-d4z4t                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m2s                   kube-proxy       
	  Normal  Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s (x2 over 9m18s)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s (x2 over 9m18s)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s (x2 over 9m18s)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m6s                   node-controller  Node default-k8s-diff-port-214905 event: Registered Node default-k8s-diff-port-214905 in Controller
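
	The section above is standard `kubectl describe node` output for the profile's single control-plane node, and can be regenerated at any point while the cluster is up (assuming the profile-named kubeconfig context):

	  $ kubectl --context default-k8s-diff-port-214905 describe node default-k8s-diff-port-214905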
	
	
	==> dmesg <==
	[  +0.039030] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.681094] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.776659] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.322742] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul22 00:51] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.065697] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056100] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.169027] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.134598] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.264476] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +4.259126] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +1.899721] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[  +0.059000] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.542682] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.552039] kauditd_printk_skb: 79 callbacks suppressed
	[ +24.368066] kauditd_printk_skb: 2 callbacks suppressed
	[Jul22 00:56] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.572760] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +4.911037] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.630436] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[ +13.364279] systemd-fstab-generator[4132]: Ignoring "noauto" option for root device
	[  +0.071880] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 00:57] kauditd_printk_skb: 84 callbacks suppressed
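
	The dmesg excerpt shows only boot-time noise (NFSD recovery-directory warnings, systemd-fstab-generator messages, suppressed audit callbacks) and no OOM kills or kernel faults. A fresh copy can be taken from inside the VM (a sketch, assuming the same minikube binary path used elsewhere in this report):

	  $ out/minikube-linux-amd64 -p default-k8s-diff-port-214905 ssh "sudo dmesg | tail -n 50"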
	
	
	==> etcd [8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b] <==
	{"level":"info","ts":"2024-07-22T00:56:09.158258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d switched to configuration voters=(10729012413228237117)"}
	{"level":"info","ts":"2024-07-22T00:56:09.158359Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"864778fba5227de3","local-member-id":"94e51bf1f139c13d","added-peer-id":"94e51bf1f139c13d","added-peer-peer-urls":["https://192.168.61.97:2380"]}
	{"level":"info","ts":"2024-07-22T00:56:09.17392Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T00:56:09.179309Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"94e51bf1f139c13d","initial-advertise-peer-urls":["https://192.168.61.97:2380"],"listen-peer-urls":["https://192.168.61.97:2380"],"advertise-client-urls":["https://192.168.61.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T00:56:09.179392Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T00:56:09.174114Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.97:2380"}
	{"level":"info","ts":"2024-07-22T00:56:09.179456Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.97:2380"}
	{"level":"info","ts":"2024-07-22T00:56:09.364095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T00:56:09.364216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T00:56:09.364262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d received MsgPreVoteResp from 94e51bf1f139c13d at term 1"}
	{"level":"info","ts":"2024-07-22T00:56:09.364293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:56:09.364316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d received MsgVoteResp from 94e51bf1f139c13d at term 2"}
	{"level":"info","ts":"2024-07-22T00:56:09.364348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d became leader at term 2"}
	{"level":"info","ts":"2024-07-22T00:56:09.364374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 94e51bf1f139c13d elected leader 94e51bf1f139c13d at term 2"}
	{"level":"info","ts":"2024-07-22T00:56:09.368847Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:56:09.370069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"94e51bf1f139c13d","local-member-attributes":"{Name:default-k8s-diff-port-214905 ClientURLs:[https://192.168.61.97:2379]}","request-path":"/0/members/94e51bf1f139c13d/attributes","cluster-id":"864778fba5227de3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:56:09.37019Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:56:09.370492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:56:09.373102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"864778fba5227de3","local-member-id":"94e51bf1f139c13d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:56:09.373234Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:56:09.373289Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:56:09.374919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T00:56:09.376576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.97:2379"}
	{"level":"info","ts":"2024-07-22T00:56:09.37782Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:56:09.379052Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:05:32 up 14 min,  0 users,  load average: 0.21, 0.14, 0.08
	Linux default-k8s-diff-port-214905 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228] <==
	I0722 00:59:29.046267       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:01:11.287453       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:01:11.287744       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0722 01:01:12.288842       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:01:12.288956       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:01:12.288965       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:01:12.288842       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:01:12.289045       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:01:12.291105       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:02:12.289839       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:02:12.290081       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:02:12.290123       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:02:12.292163       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:02:12.292207       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:02:12.292215       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:04:12.290982       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:04:12.291166       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:04:12.291184       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:04:12.292367       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:04:12.292406       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:04:12.292414       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
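
	Every error above is the same failure repeating: the aggregated v1beta1.metrics.k8s.io APIService returns 503 because the metrics-server pod never starts (its image pull is failing, per the kubelet log below). Two quick checks confirm this from outside the cluster (assuming the profile-named context):

	  $ kubectl --context default-k8s-diff-port-214905 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context default-k8s-diff-port-214905 get --raw /apis/metrics.k8s.io/v1beta1

	The first should report Available=False for the APIService; the second should fail with a 503 until metrics-server is actually serving.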
	
	
	==> kube-controller-manager [7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3] <==
	I0722 00:59:57.576493       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:00:27.028671       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:00:27.583694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:00:57.034138       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:00:57.591081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:01:27.039520       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:01:27.599568       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:01:57.044793       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:01:57.607551       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 01:02:18.041448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="340.836µs"
	E0722 01:02:27.050621       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:02:27.617892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 01:02:32.041233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="115.152µs"
	E0722 01:02:57.055974       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:02:57.626284       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:03:27.061685       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:03:27.634686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:03:57.066571       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:03:57.643455       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:04:27.073585       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:04:27.651391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:04:57.078342       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:04:57.659716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:05:27.084198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:05:27.669290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
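
	These resource-quota and garbage-collector errors are downstream of the same broken metrics.k8s.io aggregation: both controllers enumerate all API groups during discovery and keep hitting the stale group. The symptom is also visible client-side, where discovery prints the matching warning (assuming the profile-named context):

	  $ kubectl --context default-k8s-diff-port-214905 api-resources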
	
	
	==> kube-proxy [5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0] <==
	I0722 00:56:29.030644       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:56:29.081602       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.97"]
	I0722 00:56:29.247727       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:56:29.247772       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:56:29.247789       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:56:29.250099       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:56:29.250336       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:56:29.250545       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:56:29.253550       1 config.go:192] "Starting service config controller"
	I0722 00:56:29.253593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:56:29.253628       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:56:29.253644       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:56:29.256450       1 config.go:319] "Starting node config controller"
	I0722 00:56:29.258439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:56:29.354110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:56:29.354174       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:56:29.358845       1 shared_informer.go:320] Caches are synced for node config
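
	kube-proxy came up cleanly in iptables mode for IPv4 only (the "No iptables support for family ... IPv6" line matches the ip6tables failures in the kubelet log below). One way to spot-check that the proxier actually programmed rules is to list its top-level NAT chain from inside the VM (a sketch, assuming the minikube binary path used elsewhere in this report):

	  $ out/minikube-linux-amd64 -p default-k8s-diff-port-214905 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"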
	
	
	==> kube-scheduler [f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79] <==
	W0722 00:56:12.109737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:56:12.109814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:56:12.178202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 00:56:12.178244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 00:56:12.211869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:56:12.211965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:56:12.216302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:56:12.216338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:56:12.316439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:56:12.317760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 00:56:12.350068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:56:12.350160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 00:56:12.477256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 00:56:12.477439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 00:56:12.503262       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:56:12.503389       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:56:12.504440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:56:12.504526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:56:12.512048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 00:56:12.512151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 00:56:12.565377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:56:12.565483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 00:56:12.573320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:56:12.573389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0722 00:56:14.294939       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 01:03:14 default-k8s-diff-port-214905 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:03:14 default-k8s-diff-port-214905 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:03:14 default-k8s-diff-port-214905 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:03:22 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:03:22.022758    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:03:33 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:03:33.022362    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:03:45 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:03:45.023889    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:03:56 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:03:56.025601    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:04:07 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:04:07.023371    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:04:14 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:04:14.056663    3928 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:04:14 default-k8s-diff-port-214905 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:04:14 default-k8s-diff-port-214905 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:04:14 default-k8s-diff-port-214905 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:04:14 default-k8s-diff-port-214905 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:04:20 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:04:20.026975    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:04:32 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:04:32.022547    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:04:46 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:04:46.023907    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:04:57 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:04:57.023258    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:05:09 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:05:09.022105    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:05:14 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:05:14.053776    3928 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:05:14 default-k8s-diff-port-214905 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:05:14 default-k8s-diff-port-214905 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:05:14 default-k8s-diff-port-214905 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:05:14 default-k8s-diff-port-214905 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:05:21 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:05:21.022556    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:05:32 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:05:32.022544    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	
	
	==> storage-provisioner [e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8] <==
	I0722 00:56:28.957852       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:56:28.993107       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:56:28.993232       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:56:29.018379       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:56:29.020860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-214905_80549e04-a5ce-4460-8313-f0e1c2be1525!
	I0722 00:56:29.023316       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"decebcb1-6e67-4b4d-925a-5b81248c4e93", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-214905_80549e04-a5ce-4460-8313-f0e1c2be1525 became leader
	I0722 00:56:29.127733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-214905_80549e04-a5ce-4460-8313-f0e1c2be1525!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-214905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-d4z4t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-214905 describe pod metrics-server-569cc877fc-d4z4t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-214905 describe pod metrics-server-569cc877fc-d4z4t: exit status 1 (62.641541ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-d4z4t" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-214905 describe pod metrics-server-569cc877fc-d4z4t: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.04s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[... the above WARNING repeated 10 more times ...]
E0722 00:58:52.192861   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[... the above WARNING repeated 31 more times ...]
E0722 00:59:24.350450   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[... the above WARNING repeated 7 more times ...]
E0722 00:59:31.986798   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[... the above WARNING repeated 14 more times ...]
E0722 00:59:46.543513   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[... the above WARNING repeated 7 more times ...]
E0722 00:59:54.889359   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:59:55.172327   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[... the above WARNING repeated 19 more times ...]
E0722 01:00:15.237229   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 11 more times]
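The warnings above all come from the same poll: helpers_test.go is listing kubernetes-dashboard pods against an apiserver that is down while the node is stopped, so every request dies at the TCP dial. A minimal sketch of that connectivity check in Go, assuming only the apiserver address taken from the log (no kubeconfig or client-go machinery is needed to reproduce the connection-refused failure):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The apiserver endpoint the pod-list poll keeps hitting; while the
    	// node is stopped, every dial fails with "connection refused".
    	addr := "192.168.39.174:8443"
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		// Prints the same "dial tcp 192.168.39.174:8443: connect:
    		// connection refused" text seen in each warning above.
    		fmt.Printf("apiserver unreachable: %v\n", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("apiserver reachable")
    }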
E0722 01:00:51.032750   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
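The interleaved cert_rotation errors are unrelated to the dashboard poll: a background certificate-reload loop keeps re-reading client certificates for profiles (flannel-280040, calico-280040, and others below) that earlier tests have already deleted, so each reload fails at file open. A minimal sketch of that failure mode, assuming the reload reduces to a key-pair load from the logged path and that client.key sits alongside client.crt (the real client-go loop is more involved):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    )

    func main() {
    	// Cert path copied from the log line above; the matching key path
    	// is an assumption. The profile was deleted, so the load fails
    	// with "open ...: no such file or directory".
    	crt := "/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt"
    	key := "/home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.key"
    	if _, err := tls.LoadX509KeyPair(crt, key); err != nil {
    		// Same shape of error the cert_rotation loop logs.
    		fmt.Printf("key failed with : %v\n", err)
    	}
    }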
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 3 more times]
E0722 01:00:55.032591   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 1 more time]
E0722 01:00:57.335386   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 12 more times]
E0722 01:01:09.588812   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
E0722 01:01:10.764162   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 6 more times]
E0722 01:01:17.932894   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 55 more times]
E0722 01:02:14.077086   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 19 more times]
E0722 01:02:33.808115   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 19 more times]
E0722 01:02:54.281839   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 6 more times]
E0722 01:03:01.305866   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[previous line repeated 32 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
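The WARNING above is the test helper polling the apiserver for dashboard pods while the control plane is down between the stop and the restart; each attempt is an ordinary pod list by label selector, refused until kube-apiserver on 192.168.39.174:8443 comes back. A minimal client-go sketch of an equivalent poll (the kubeconfig path and retry interval are illustrative assumptions, not the helper's actual code):

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the integration run uses a per-profile config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		// Same request the helper issues: list pods by label selector.
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(
			context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is stopped this is "connection refused".
			log.Printf("WARNING: pod list returned: %v", err)
			time.Sleep(3 * time.Second)
			continue
		}
		log.Printf("found %d dashboard pods", len(pods.Items))
		return
	}
}

Once the apiserver is reachable again the same request starts returning a pod list, which is why runs of these warnings end abruptly rather than tapering off.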
E0722 01:03:52.192608   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
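The interleaved E-lines from cert_rotation.go:168 appear to come from client-go's client-certificate reload watcher, which still references certs for profiles deleted earlier in the run (kindnet-280040, calico-280040, and so on); they are noise relative to this test's failure. An illustrative check (assumed kubeconfig path) for kubeconfig users whose cert or key files no longer exist:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; substitute the CI worker's kubeconfig.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("HOME") + "/.kube/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for name, auth := range cfg.AuthInfos {
		for _, p := range []string{auth.ClientCertificate, auth.ClientKey} {
			if p == "" {
				continue
			}
			if _, err := os.Stat(p); err != nil {
				// Matches the "no such file or directory" in the E-lines above.
				fmt.Printf("%s: %v\n", name, err)
			}
		}
	}
}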
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[the previous WARNING line repeated 39 more times]
E0722 01:04:31.986716   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[the previous WARNING line repeated 14 more times]
E0722 01:04:46.543833   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[the previous WARNING line repeated 7 more times]
E0722 01:04:54.889559   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 01:04:55.172917   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[the previous WARNING line repeated 55 more times]
E0722 01:05:51.033225   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[the previous WARNING line repeated 19 more times]
E0722 01:06:10.763799   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[identical warning repeated 90 times until the 9m0s poll deadline]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
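Each of the warnings above is one failed iteration of the helper's poll: a pod list with the label selector k8s-app=kubernetes-dashboard against the profile's apiserver, refused because the apiserver is down. A minimal standalone sketch of that same call follows; the kubeconfig path and error handling here are assumptions for illustration, not taken from the test harness.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the harness uses its own profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same request as the warnings above:
	// GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// With the apiserver stopped, this is where "connect: connection refused" surfaces.
		log.Fatalf("pod list failed: %v", err)
	}
	fmt.Printf("found %d dashboard pods\n", len(pods.Items))
}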
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 2 (225.170411ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-366657" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
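The very last poll warning above differs from the rest: once the 9m0s wait context expires, client-go's request rate limiter fails inside Wait before any request is even sent, which produces the "client rate limiter Wait returned an error: context deadline exceeded" line. Below is a hedged sketch of that mechanism using client-go's flowcontrol package; the QPS/burst numbers and timeout are illustrative, not the values the test client uses.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Deliberately slow limiter: 0.1 requests/sec, burst of 1.
	limiter := flowcontrol.NewTokenBucketRateLimiter(0.1, 1)
	limiter.Accept() // consume the only burst token

	// A context that expires before the next token becomes available,
	// standing in for the poll's 9m0s deadline running out mid-wait.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	if err := limiter.Wait(ctx); err != nil {
		// The exact error text may differ from the log's, but the failure
		// path is the same: the wait, not the request, hits the deadline.
		fmt.Println("client rate limiter Wait returned an error:", err)
	}
}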
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 2 (220.646308ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
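The --format={{.APIServer}} and --format={{.Host}} flags used above are Go text/template expressions evaluated against minikube's status value, which is why the same command can print "Stopped" for the apiserver while the host VM reports "Running". A small illustration of that evaluation; the struct here mirrors only the two fields these commands reference, not minikube's full status type.

package main

import (
	"os"
	"text/template"
)

// Status stands in for the value minikube renders; the real type has more
// fields (Kubelet, Kubeconfig, ...) than the two shown here.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"}
	for _, f := range []string{"{{.APIServer}}", "{{.Host}}"} {
		t := template.Must(template.New("status").Parse(f + "\n"))
		_ = t.Execute(os.Stdout, st) // prints "Stopped", then "Running"
	}
}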
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-366657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-366657 logs -n 25: (1.552482355s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-590595             | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-590595                  | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
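
Aside: the block above is the persisted profile config, which minikube round-trips as JSON under .minikube/profiles/<name>/config.json (the "Saving config to ..." line further down). A minimal Go sketch of that save step, using a hypothetical trimmed-down struct rather than minikube's real types:

package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

// ClusterConfig is a hypothetical, heavily trimmed stand-in for the
// profile config dumped above; the real struct carries every field shown.
type ClusterConfig struct {
	Name              string
	KubernetesVersion string
	ContainerRuntime  string
	Memory            int
	CPUs              int
}

// saveProfile writes the config as indented JSON to
// <home>/profiles/<name>/config.json, creating directories as needed.
func saveProfile(home string, c ClusterConfig) error {
	dir := filepath.Join(home, "profiles", c.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(c, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	cfg := ClusterConfig{Name: "embed-certs-360389", KubernetesVersion: "v1.30.3", ContainerRuntime: "crio", Memory: 2200, CPUs: 2}
	if err := saveProfile(filepath.Join(os.TempDir(), ".minikube"), cfg); err != nil {
		log.Fatal(err)
	}
}
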
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
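
Aside: the preload check above (preload.go) reduces to probing a well-known path under the cache directory and skipping the download when the tarball is already present. A sketch of that existence check; the path scheme is copied from the log, the function names are hypothetical:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache path for a preloaded-images tarball; the
// naming mirrors the log ("preloaded-images-k8s-v18-<k8s>-<runtime>-overlay-<arch>.tar.lz4").
func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

// haveLocalPreload reports whether the tarball already exists, so the
// download can be skipped, as in "Found ... in cache, skipping download".
func haveLocalPreload(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.Mode().IsRegular()
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.30.3", "cri-o", "amd64")
	fmt.Println(p, "cached:", haveLocalPreload(p))
}
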
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:19.710843   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
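
Aside: the "StartHost failed, but will try again" handling above is a fixed-delay retry: log the provisioning error, sleep 5 seconds, try again. A minimal sketch of that shape, with startHost as a hypothetical stand-in for the real start routine:

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// retryStartHost retries a failing start routine with a fixed 5s pause
// between attempts, mirroring start.go:714/729 above.
func retryStartHost(startHost func() error, maxAttempts int) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = startHost(); err == nil {
			return nil
		}
		log.Printf("! StartHost failed, but will try again: %v", err)
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("start host after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := retryStartHost(func() error {
		calls++
		if calls < 2 {
			return errors.New("provision: host is not running")
		}
		return nil
	}, 3)
	fmt.Println("result:", err)
}
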
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
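
Aside: each "will retry after ...: waiting for machine to come up" line above comes from a poll loop that asks for the domain's DHCP-assigned IP and sleeps a growing, jittered delay between attempts (218ms, 289ms, 404ms, ...). A sketch of that backoff shape; lookupIP is a hypothetical stand-in for the real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP polls lookupIP until it returns an address or the deadline
// passes, sleeping a jittered, growing delay between attempts.
func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	base := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		d := base + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		base = base * 3 / 2 // grow the delay, as the log's sequence suggests
	}
	return "", fmt.Errorf("timed out: %w", errNoIP)
}

func main() {
	n := 0
	ip, err := waitForIP(func() (string, error) {
		n++
		if n < 4 {
			return "", errNoIP
		}
		return "192.168.50.251", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
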
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
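
Aside: the WaitForSSH step above shells out to the system ssh client with the option set visible in the log (no known_hosts, key-only auth, short connect timeout) and runs `exit 0` until it succeeds. A sketch of that probe using os/exec:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady probes sshd by running `exit 0` through the external ssh
// client, with host-key checking disabled as in the log.
func sshReady(user, ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for !sshReady("docker", "192.168.50.251", "/path/to/id_rsa") {
		time.Sleep(time.Second) // keep polling until the guest's sshd answers
	}
	fmt.Println("SSH is available")
}
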
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
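
Aside: hostname provisioning is two remote commands: set /etc/hostname, then make /etc/hosts agree with it via the guarded grep/sed/tee block shown above. A sketch that assembles those commands; in real use they would be handed to an SSH runner:

package main

import "fmt"

// provisionHostname returns the two shell commands the provisioner runs
// over SSH, reconstructed from the log. Hypothetical helper.
func provisionHostname(name string) []string {
	set := fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)
	hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return []string{set, hosts}
}

func main() {
	for _, cmd := range provisionHostname("no-preload-945581") {
		fmt.Println(cmd) // hand these to an SSH runner in real use
	}
}
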
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
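
Aside: copyHostCerts above refreshes ca.pem/cert.pem/key.pem in the machine store with a found-remove-copy sequence. A sketch of one such refresh; the paths mirror the log, the helper itself is hypothetical:

package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

// refreshCert mirrors the found/removing/cp sequence in the log: delete
// a stale destination if present, then copy the source over.
func refreshCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		log.Printf("found %s, removing ...", dst)
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	home := os.Getenv("MINIKUBE_HOME")
	for _, f := range []string{"ca.pem", "cert.pem", "key.pem"} {
		if err := refreshCert(filepath.Join(home, "certs", f), filepath.Join(home, f)); err != nil {
			log.Fatal(err)
		}
	}
}
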
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
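
Aside: the guest-clock check parses the `date +%s.%N` output, diffs it against the host-side timestamp, and accepts the skew if it is under a tolerance. A sketch with the values from the log; the 2s tolerance is an assumed illustration value, not necessarily minikube's:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1721609407.082052746")
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1721609407.082052746")
	remote := time.Date(2024, 7, 22, 0, 50, 6, 988874638, time.UTC)
	delta := guest.Sub(remote) // ~93ms, matching the log line above
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance) // Duration.Abs needs Go 1.19+
}
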
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
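
Aside: the cri-o tweaks above are all in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf run over SSH. A sketch that assembles the first few of those commands; the command text mirrors the log, the function is hypothetical:

package main

import "fmt"

// crioConfigCommands returns the sed edits shown in the log for a given
// pause image and cgroup manager; each string would be run as
// `sh -c "..."` through an SSH runner.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(c)
	}
}
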
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
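The failed sysctl probe above is expected on a fresh guest: /proc/sys/net/bridge only appears once br_netfilter is loaded, so the tooling falls back to modprobe and then enables IPv4 forwarding. A sketch of that probe-and-fallback sequence, with error handling reduced to panics:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// sysctl exits non-zero while br_netfilter is not loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		panic(err)
	}
}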
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
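LoadCachedImages first asks the runtime which images it already has; the check above ran `sudo crictl images --output json` and found none of the expected tags, so every image must be transferred. A sketch of that check, assuming the standard CRI JSON shape (an images array with repoTags):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.0-beta.0"
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded image present:", want)
				return
			}
		}
	}
	fmt.Println("couldn't find preloaded image, assuming images are not preloaded")
}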
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
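The sequence above repeats the same pattern per image: stat the tarball under /var/lib/minikube/images, skip the host-to-guest copy when it already exists, then `podman load` it into the runtime. A condensed sketch of that loop, with the copy step omitted:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCached(name string) error {
	dst := "/var/lib/minikube/images/" + name
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("copy: skipping", dst, "(exists)")
	} // else: scp the tarball from the host cache first (omitted here)
	fmt.Println("Loading image:", dst)
	return exec.Command("sudo", "podman", "load", "-i", dst).Run()
}

func main() {
	for _, n := range []string{
		"kube-apiserver_v1.31.0-beta.0",
		"kube-controller-manager_v1.31.0-beta.0",
		"coredns_v1.11.1",
		"kube-scheduler_v1.31.0-beta.0",
		"kube-proxy_v1.31.0-beta.0",
		"etcd_3.5.14-0",
		"storage-provisioner_v5",
	} {
		if err := loadCached(n); err != nil {
			panic(err)
		}
	}
}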
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
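The kubelet drop-in above is generated from the cluster config (binary version, hostname override, node IP). A sketch of rendering it with text/template; the unit text is copied from the log, the struct fields are illustrative:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		"v1.31.0-beta.0", "no-preload-945581", "192.168.50.251",
	})
}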
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
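The bash one-liner above upserts the control-plane entry in /etc/hosts: filter out any stale line for the name, then append the fresh mapping. The same logic in Go:

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.50.251\t" + host
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var keep []string
	for _, line := range lines {
		// Drop any existing mapping for the control-plane name.
		if !strings.HasSuffix(line, "\t"+host) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}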
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
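The openssl -checkend 86400 runs above confirm each control-plane certificate stays valid for at least another day before the restart proceeds. An equivalent check in Go's crypto/x509, shown for one certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}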
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
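The loop above treats any kubeconfig that does not mention https://control-plane.minikube.internal:8443 as stale and removes it so kubeadm can regenerate it. A sketch of that cleanup:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), want) {
			fmt.Printf("%q may not be in %s - will remove\n", want, f)
			os.Remove(f) // ignore the error, as `rm -f` would
		}
	}
}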
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
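Restarting an existing kvm2 machine means activating its networks and then booting the stopped domain through libvirt. As a rough stand-in for what the kvm2 plugin does over the libvirt API, the same sequence via virsh (assumed available on the host; domain and network names taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func virsh(args ...string) error {
	cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("virsh %v: %s", args, out)
	return err
}

func main() {
	// Ensure the networks are active before booting the domain; these calls
	// fail harmlessly if a network is already active, so errors are ignored.
	virsh("net-start", "default")
	virsh("net-start", "mk-embed-certs-360389")
	if err := virsh("start", "embed-certs-360389"); err != nil {
		panic(err)
	}
}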
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
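
The embedded script above makes the machine name resolve locally: it rewrites or appends the 127.0.1.1 entry only when no line in /etc/hosts already carries the hostname. A quick way to verify the result over the same SSH session (illustrative commands, not part of the test run):

	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expect: 127.0.1.1 old-k8s-version-366657
	hostname                                        # expect: old-k8s-version-366657
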
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
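
The server certificate is minted with the SAN list shown above. One illustrative way to confirm the SANs on the generated cert (requires OpenSSL 1.1.1+ for the -ext flag):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem
	# expect: DNS:localhost, DNS:minikube, DNS:old-k8s-version-366657,
	#         IP Address:127.0.0.1, IP Address:192.168.39.174
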
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
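
The %!s(MISSING) token in the command above (and in later commands such as date +%!s(MISSING).%!N(MISSING) and the find -printf below) is a Go fmt-verb artifact introduced when the log was captured, not part of what actually ran on the guest. Judging from the config content echoed back in the command output, the intended command most likely expands to:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
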
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
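
After the three sed edits above, the CRI-O drop-in should carry the pause image and cgroup settings that were just written. An illustrative way to confirm (expected values reconstructed from the sed commands, not captured output):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
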
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
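
The sysctl probe above fails with status 255 simply because br_netfilter is not loaded yet; minikube then loads the module and enables IP forwarding imperatively. A persistent equivalent of this sequence would look roughly like the sketch below (minikube itself does not write these files here):

	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward = 1
	EOF
	sudo sysctl --system
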
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
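
For the kvm2 + crio combination, minikube falls back to a plain bridge CNI at this step. A representative conflist of the kind it writes is sketched below; the file name, subnet, and exact plugin list are typical values assumed for illustration, not taken from this log:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
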
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
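The "daemon lookup" errors above are expected: they only mean the images are absent from the host's container daemon, so the loader falls back to the on-disk cache. A hedged sketch of that fallback decision (the docker CLI call stands in for minikube's internal daemon lookup, and the cache layout mirrors the paths logged below):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// daemonImageID asks the local container engine for an image ID; a non-nil
// error corresponds to the "No such image" daemon lookups in the log.
func daemonImageID(ref string) (string, error) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// cachePathFor maps "registry.k8s.io/pause:3.2" to a cache file name in the
// pause_3.2 style that appears later in the log.
func cachePathFor(cacheDir, ref string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(ref, ":", "_"))
}

func main() {
	ref := "registry.k8s.io/pause:3.2"
	if id, err := daemonImageID(ref); err == nil {
		fmt.Println("found in daemon:", id)
		return
	}
	cached := cachePathFor(os.ExpandEnv("$HOME/.minikube/cache/images/amd64"), ref)
	if _, err := os.Stat(cached); err != nil {
		fmt.Printf("not in daemon and not cached: %v\n", err)
		return
	}
	fmt.Println("loading image from:", cached)
}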
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
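The inspect/rmi sequence above implements the "needs transfer" rule: if the runtime does not hold the image at the expected hash, the stale copy is removed with crictl and the cached tarball is queued for loading. A compact sketch of that rule, using the same CLI commands the log shows (the surrounding wiring is assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage mirrors the "needs transfer" flow: if the runtime does not
// report the expected ID for ref, remove whatever is there with crictl and
// report that the cached tarball must be loaded.
func ensureImage(ref, wantID string) (needsTransfer bool) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil || strings.TrimSpace(string(out)) != wantID {
		_ = exec.Command("sudo", "crictl", "rmi", ref).Run() // ignore "not found"
		return true
	}
	return false
}

func main() {
	if ensureImage("registry.k8s.io/pause:3.2", "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c") {
		fmt.Println(`"registry.k8s.io/pause:3.2" needs transfer`)
	}
}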
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
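These system_pods and node_conditions lines enumerate the kube-system pods and read per-node capacity before the addon phase runs. A minimal client-go sketch of the same two reads (using the default kubeconfig location as an assumption):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// "waiting for kube-system pods to appear ..."
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// "verifying NodePressure condition ...": read per-node capacity.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}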
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
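pod_ready skips a pod when its hosting node is not Ready, and otherwise waits for the pod's own Ready condition to become True. The condition check itself, sketched with the client-go types (the surrounding 4m0s polling loop is omitted):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod carries the Ready condition with
// status True, the same check behind the `status "Ready":"True"` lines.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}}
	fmt.Println(isPodReady(pod)) // true
}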
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
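Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. The same check in Go with crypto/x509 (the path below is one of the certs from the log; any PEM certificate works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}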
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
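The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already references https://control-plane.minikube.internal:8443, otherwise delete it so the following `kubeadm init phase kubeconfig all` regenerates it. A compact Go sketch of that cleanup:

package main

import (
	"os"
	"strings"
)

// cleanStaleConfigs keeps a kubeconfig only if it references the expected
// control-plane endpoint; otherwise it is removed so that
// `kubeadm init phase kubeconfig all` recreates it.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // file missing: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			os.Remove(p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}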
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
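The repeated pgrep runs are a fixed-interval poll, roughly every 500ms, for a kube-apiserver process, bounded by an overall deadline. A sketch of the loop (the two-minute timeout is an assumption; the pgrep pattern is the one logged):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` until the
// process appears or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 when a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver process did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(2 * time.Minute))
}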
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
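
The `generating server cert` step above issues a TLS server certificate whose SANs cover every name and address the VM will be reached by (127.0.0.1, 192.168.72.32, the hostname, localhost, minikube). A minimal Go sketch of a certificate with that SAN set follows; it is self-signed for brevity, whereas the real step signs with the minikube CA loaded from ca.pem/ca-key.pem, and the validity period is chosen arbitrarily:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN list and org copied from the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-360389"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // arbitrary for the sketch
		DNSNames:     []string{"embed-certs-360389", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.32")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: the template doubles as the issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
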
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
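
Each `sshutil.go:53] new ssh client` line marks a fresh SSH connection to the VM, authenticated with the machine's private key. A stripped-down sketch of that step using golang.org/x/crypto/ssh; the address, user, and key path are the values printed above, and host-key verification is skipped as it would be for a throwaway test VM:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa"
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for ephemeral test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.72.32:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One session per remote command, as in the Run: lines that follow.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
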
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
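
The fix.go lines above read the guest clock over SSH with `date +%s.%N`, diff it against the host clock, and accept the drift when it is inside a tolerance. A small sketch of that comparison, using the two timestamps from this log and an assumed 2s bound (the actual tolerance is defined in minikube's fix.go):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between guest and host clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	guest := time.Unix(1721609447, 6036489)  // 1721609447.006036489 from `date +%s.%N`
	host := time.Unix(1721609446, 915558854) // host wall clock at the same instant
	delta := clockDelta(guest, host)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= 2*time.Second)
}
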
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
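
The sequence above is probe-then-fallback: reading the bridge-nf-call-iptables sysctl exits 255 because br_netfilter is not yet loaded, so the module is modprobe'd and IPv4 forwarding is switched on. The same fallback sketched in Go (must run as root; error handling kept minimal):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// Probe: does the sysctl key exist yet?
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key missing: load the kernel module that provides it.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
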
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
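
No preloaded images were found in the CRI-O store, so the ~406MB preload tarball is copied to the VM and unpacked into /var with extended attributes preserved (the security.capability xattr matters for binaries that need file capabilities). A sketch of the extraction step as it would be driven from Go:

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball the way the
// log's tar invocation does, keeping security.capability xattrs.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}
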
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
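
The `retry.go:31] will retry after ...ms` lines show the wait-for-IP loop backing off with randomized, growing delays (231ms, 274ms, 470ms, ..., 1.15s). A generic sketch of that pattern; the base delay, growth factor, and attempt cap here are illustrative, not the values minikube's retry.go uses:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a
// randomized, growing delay between tries.
func retry(attempts int, fn func() error) error {
	base := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if fn() == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v\n", d)
		time.Sleep(d)
		base += base / 2 // widen the backoff window each round
	}
	return errors.New("gave up waiting")
}

func main() {
	tries := 0
	_ = retry(10, func() error {
		tries++
		if tries < 4 {
			return errors.New("no IP yet") // e.g. DHCP lease not assigned
		}
		return nil
	})
}
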
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
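
These half-second-spaced Run lines are a poll loop: `pgrep -xnf kube-apiserver.*minikube.*` is re-run until the apiserver process appears. The loop sketched in Go (the timeout is arbitrary for the sketch):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep every 500ms until the kube-apiserver
// process exists or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // pgrep exit 0 means a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
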
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
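
The generated kubeadm config above is one file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A minimal helper for splitting such a file before handing each document to its own parser; the splitting rule here is a simplification that assumes the separator sits on its own line:

package main

import (
	"fmt"
	"strings"
)

// splitYAMLDocs breaks a multi-document YAML string on standalone "---"
// separators and drops empty documents.
func splitYAMLDocs(s string) []string {
	var docs []string
	for _, d := range strings.Split(s, "\n---\n") {
		if strings.TrimSpace(d) != "" {
			docs = append(docs, d)
		}
	}
	return docs
}

func main() {
	cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"
	for i, d := range splitYAMLDocs(cfg) {
		fmt.Printf("doc %d: %s\n", i, strings.TrimSpace(d))
	}
}
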
	
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
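
The bash one-liner above makes the /etc/hosts entry idempotent: drop any existing line ending in the name, append a fresh IP<TAB>name record, and copy the result back into place. The same logic sketched in Go, writing the file directly instead of staging through /tmp/h.$$ and sudo cp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts removes any line for name and appends "ip\tname".
func updateHosts(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := updateHosts("/etc/hosts", "192.168.72.32", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
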
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
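
Each pair of `openssl x509 -hash -noout` and `ln -fs .../<hash>.0` calls above installs a certificate under OpenSSL's hashed-directory convention, so lookups by subject hash resolve to the file. The pair combined into one helper, shelling out to openssl exactly as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the cert's subject hash via openssl and
// symlinks <certsDir>/<hash>.0 at the certificate.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace an existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
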
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
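
The `-checkend 86400` runs above ask whether each control-plane certificate will still be valid 24 hours from now. The equivalent check in pure Go with crypto/x509, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path will be expired
// d from now, mirroring `openssl x509 -noout -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
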
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
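	[annotation] The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a fixed-interval poll for the apiserver process; the ~0.5s cadence is visible in the timestamps. A minimal sketch of that wait loop (the timeout value is an assumption, not taken from minikube source):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process
// matching the pattern appears, or the deadline expires.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x: exact match, -n: newest, -f: match against the full command line.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```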
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
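	[annotation] The healthz progression above is typical of an apiserver coming up: connection refused while the process binds, 403 while anonymous access is still forbidden, 500 while poststart hooks such as rbac/bootstrap-roles finish, then 200 "ok". A hedged sketch of such a poll loop (skipping TLS verification and the retry interval are my assumptions, not minikube's exact settings):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200. Intermediate
// 403s and 500s are expected while the control plane bootstraps.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is self-signed; verification skipped for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.32:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```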
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
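	[annotation] The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config the "Configuring bridge CNI" step refers to. The literal bytes are not in the log; the sketch below writes a representative bridge conflist whose field values are illustrative only:

```go
package main

import "os"

// A representative bridge CNI conflist. The subnet and plugin options
// here are illustrative, not the literal file minikube copied above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```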
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
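	[annotation] The NodePressure step reads the node's reported capacity (here 17734596Ki ephemeral storage and 2 CPUs). A minimal client-go sketch that pulls the same fields (the kubeconfig path is an assumption for the sketch):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same capacity figures the node_conditions check logs.
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```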
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
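	[annotation] The retry.go lines above show libmachine polling the DHCP leases for the domain's IP with growing, jittered intervals (1.41s, 1.48s, 2.29s, ...). A minimal sketch of a retry helper with that shape (the jitter and growth factors are my assumptions; minikube has its own retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter keeps calling fn until it succeeds or the deadline
// passes, sleeping a jittered, growing interval between attempts.
func retryWithJitter(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := time.Second
	for time.Now().Before(deadline) {
		if err := fn(); err == nil {
			return nil
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait)/2))
		fmt.Printf("will retry after %s\n", jittered)
		time.Sleep(jittered)
		wait = wait * 3 / 2 // grow the base interval
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	_ = retryWithJitter(time.Minute, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet")
		}
		return nil
	})
}
```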
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
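	[annotation] "Ready" in the pod_ready waits above means the pod's PodReady condition is True, and the waiter short-circuits (the "(skipping!)" lines) whenever the hosting node itself is not Ready. A client-go sketch of both condition checks (function names are mine, not minikube's):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the PodReady condition is True --
// the condition the pod_ready.go waiter polls for.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeIsReady is the short-circuit: if the hosting node is not Ready,
// waiting on its pods is skipped.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println(podIsReady(pod)) // true
}
```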
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
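	[annotation] The `-16` read here comes straight from /proc; a negative oom_adj makes the kernel's OOM killer prefer other processes over the apiserver. A minimal sketch of the same check:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver PID the same way the log does,
	// then read its oom_adj from /proc.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```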
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
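	[annotation] Each addon above is enabled by copying manifests into /etc/kubernetes/addons and applying them with the version-pinned kubectl, exactly as in the Run: lines earlier. A hedged sketch of that apply step (the manifest list is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon applies a manifest with the pinned kubectl, pointing it at
// the in-VM kubeconfig, mirroring the "sudo KUBECONFIG=... kubectl apply"
// invocations in the log.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", m, err)
		}
	}
}
```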
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
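The shell heredoc above is the provisioner's /etc/hosts fix-up: it rewrites (or appends) the 127.0.1.1 entry so the freshly set hostname resolves locally. A small Go sketch of how such a script can be parameterized on the hostname; this is not minikube's actual helper, just the same shell logic templated:

    // Build the /etc/hosts fix-up command shown in the log, with the
    // hostname as the only input.
    package main

    import "fmt"

    func setHostnameCmd(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname, hostname, hostname)
    }

    func main() {
        fmt.Println(setHostnameCmd("default-k8s-diff-port-214905"))
    }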
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
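configureAuth above generates a server certificate whose SAN list must cover every name the machine is reached by (127.0.0.1, 192.168.61.97, the machine name, localhost, minikube), then copies the cert material to /etc/docker. A hedged Go sketch of verifying that a generated cert really covers those SANs, using only crypto/x509; the local file path is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("server.pem") // e.g. the generated machines/server.pem
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in server.pem")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // VerifyHostname matches DNS names and IP SANs alike.
        for _, san := range []string{"127.0.0.1", "192.168.61.97",
            "default-k8s-diff-port-214905", "localhost", "minikube"} {
            if err := cert.VerifyHostname(san); err != nil {
                fmt.Printf("missing SAN %q: %v\n", san, err)
            }
        }
    }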
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
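The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta (81.995205ms here) is within tolerance. A sketch of that computation; the one-second threshold is an assumption for the demo, not minikube's exact constant:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output into a time.Time.
    // It assumes a 9-digit nanosecond fraction, as in the log.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        const tolerance = time.Second // assumed threshold for illustration
        guest, err := parseGuestClock("1721609467.506036600") // guest output from the log
        if err != nil {
            panic(err)
        }
        host := time.Unix(1721609467, 424041395) // the "Remote" timestamp from the log
        delta := guest.Sub(host)                 // prints 81.995205ms, matching fix.go
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }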
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
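The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged port 0 via default_sysctls. A Go sketch of the first two rewrites, with regexp standing in for sed; the sample config content is invented for the demo:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.5"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        // Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }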
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
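The pod_ready lines track a Ready-condition poll: each system-critical pod is fetched repeatedly until its PodReady condition reports True, within the 6m0s budget shown. A sketch of the same wait with client-go; the kubeconfig path and pod name are illustrative, and minikube's own helper differs in detail:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "etcd-embed-certs-360389", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod")
    }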
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
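The preload path above avoids pulling images one by one: a ~406MB lz4 tarball of cached images is scp'd to /preloaded.tar.lz4, unpacked under /var with xattrs preserved, then removed. A rough Go sketch of the extract-and-clean-up step, with a local exec.Command standing in for minikube's ssh_runner:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Same tar flags as the log: keep xattrs (security.capability),
        // decompress with lz4, unpack under /var.
        cmd := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("extract preload: %v", err)
        }
        // the tarball is deleted afterwards to free space, as in the log
        _ = os.Remove("/preloaded.tar.lz4")
    }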
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
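The kubeadm config above is rendered from the options dump at kubeadm.go:181: node IP, bind port, cluster name, CRI socket and cgroup driver are substituted into a YAML template. A toy Go sketch of that substitution for the InitConfiguration stanza; the template here is invented for illustration, not minikube's real one:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.Name}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the log: node IP 192.168.61.97, port 8444.
        _ = t.Execute(os.Stdout, struct {
            NodeIP string
            Port   int
            Name   string
        }{"192.168.61.97", 8444, "default-k8s-diff-port-214905"})
    }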
	
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
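The openssl/ln pairs above implement the c_rehash convention: each CA file is hashed with `openssl x509 -hash -noout` and an /etc/ssl/certs/<hash>.0 symlink is created so TLS stacks can locate it by subject hash. A Go sketch of creating one such link, with the paths taken from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        if err := os.Symlink("/usr/share/ca-certificates/minikubeCA.pem", link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }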
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
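Each `openssl x509 -checkend 86400` run above asserts that a control-plane certificate stays valid for at least another 24 hours before the restart proceeds. The equivalent check in pure Go with crypto/x509; this is a sketch using one cert path from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the cert at path expires inside the window,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }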
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
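The four grep/rm pairs above all apply one rule: keep a component kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it so the kubeconfig phase can regenerate it. A compact sketch of that loop, with the endpoint and file list taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleKubeconfigs removes any component kubeconfig that does not
    // already reference the expected control-plane endpoint; kubeadm will
    // regenerate the removed files in the kubeconfig phase.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file) is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444")
    }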
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
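Because this is a restart rather than a full `kubeadm init`, the phases are replayed individually in the order just logged: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence under the same PATH convention (the trailing /usr/bin in PATH is an assumption added so the sketch still finds sudo's usual tools):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays kubeadm init phases one by one against the
    // refreshed config, as the logged commands do, instead of a full init.
    func runInitPhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("sudo", append([]string{"env",
                "PATH=/var/lib/minikube/binaries/v1.30.3:/usr/bin", "kubeadm"}, args...)...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases(); err != nil {
            fmt.Println(err)
        }
    }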
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
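The 2.5s wait just reported is a plain 500ms pgrep poll, as the repeated Run lines show. Sketched below with an explicit deadline; the pgrep pattern is the logged one, the timeout value an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep every 500ms for a kube-apiserver
    // process whose full command line references the minikube config, until
    // it appears or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }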
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
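The healthz progression above is the typical shape of an apiserver restart: anonymous requests first hit 403 while the RBAC bootstrap roles are missing, then 500 while the rbac/bootstrap-roles and scheduling poststart hooks are still failing, and finally 200 "ok". A sketch of a poll loop that tolerates exactly that progression; TLS verification is skipped here only because the sketch carries no CA bundle, whereas minikube itself verifies:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls /healthz, treating 403 and 500 as "not yet" and
    // returning once the endpoint answers 200 with the literal body "ok".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.97:8444/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }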
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
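The 496-byte conflist itself is not reproduced in the log; the following is a generic bridge+portmap conflist of the kind dropped at /etc/cni/net.d/1-k8s.conflist, written to a temp dir so the sketch is harmless to run. Treat the JSON contents as an assumption, not minikube's exact file:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // A representative bridge CNI config: a bridge plugin with host-local IPAM
    // plus portmap for hostPort support. Subnet and names are assumptions.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        dir := filepath.Join(os.TempDir(), "cni-net.d") // stand-in for /etc/cni/net.d
        if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors sudo mkdir -p
            panic(err)
        }
        path := filepath.Join(dir, "1-k8s.conflist")
        if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("wrote", path)
    }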
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
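The NodePressure pass reads capacity and pressure conditions straight off the Node objects, which is where the 17734596Ki / 2-cpu figures above come from. A client-go sketch of the same check; it needs k8s.io/client-go in go.mod, and the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative; any valid admin kubeconfig works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                    c.Status == corev1.ConditionTrue {
                    fmt.Printf("node %s under pressure: %s\n", n.Name, c.Type)
                }
            }
        }
    }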
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
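Each pod_ready wait above is a poll on the pod's Ready condition with a 4m0s ceiling. A client-go sketch of that loop; the pod name is taken from the log, the 2s poll interval is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition is True or the
    // deadline passes, the same shape as the pod_ready.go waits above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s never became Ready", ns, name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "kube-system", "kube-proxy-4mnlj", 4*time.Minute))
    }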
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
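When no control-plane containers exist, as in every found id: "" block above, the harness falls back to gathering kubelet and CRI-O journals, dmesg, a describe-nodes attempt, and a container-status listing to explain why. A sketch of that collection step, reusing the logged shell commands verbatim:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherDiagnostics mirrors the fallback logged above: with no containers
    // to inspect, collect the journals and listings that might explain why.
    func gatherDiagnostics() {
        steps := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range steps {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s logs failed: %v\n", s.name, err)
                continue // describe nodes fails here: apiserver is not up
            }
            fmt.Printf("== %s ==\n%s\n", s.name, out)
        }
    }

    func main() { gatherDiagnostics() }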
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
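	
	Each cycle above walks a fixed roster of control-plane component names and asks crictl for containers whose name matches; every query returns an empty ID list (found id: ""), which is what triggers the paired "No container was found matching" warnings. A small sketch of that enumeration, assuming crictl is installed and sudo is available:
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    func main() {
	        // The same component roster the log cycles through.
	        names := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "kubernetes-dashboard",
	        }
	        for _, name := range names {
	            // --quiet prints only container IDs, one per line.
	            out, err := exec.Command("sudo", "crictl", "ps", "-a",
	                "--quiet", "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("crictl failed for %q: %v\n", name, err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            if len(ids) == 0 {
	                fmt.Printf("No container was found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("%s: found %v\n", name, ids)
	        }
	    }
	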
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
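	
	The "container status" command is worth unpacking: the backquoted `which crictl || echo crictl` resolves crictl's full path (falling back to the bare name), and if that crictl invocation fails outright, the trailing `|| sudo docker ps -a` retries with Docker. Here is the same fallback expressed directly in Go; containerStatus is a hypothetical helper name, not minikube's actual implementation.
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    // containerStatus mirrors the fallback one-liner from the log:
	    // prefer crictl, fall back to docker if crictl fails.
	    func containerStatus() (string, error) {
	        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
	            return string(out), nil
	        }
	        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	        return string(out), err
	    }
	
	    func main() {
	        out, err := containerStatus()
	        if err != nil {
	            fmt.Println("both crictl and docker failed:", err)
	            return
	        }
	        fmt.Print(out)
	    }
	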
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
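	
	Apart from the (empty) container listings, every gathering pass collects the same four log sources: kubelet and CRI-O unit logs via journalctl, kernel warnings via dmesg, and the container-status fallback dissected earlier. A compact sketch of such a source table, with the shell commands copied verbatim from the log; the table-driven structure is an assumption for illustration, not minikube's actual code layout.
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    func main() {
	        // Log sources and the exact commands the log shows for them.
	        sources := map[string]string{
	            "kubelet":          "sudo journalctl -u kubelet -n 400",
	            "CRI-O":            "sudo journalctl -u crio -n 400",
	            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	        }
	        for name, script := range sources {
	            fmt.Println("Gathering logs for", name, "...")
	            out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	            if err != nil {
	                fmt.Printf("  %s failed: %v\n", name, err)
	                continue
	            }
	            fmt.Printf("  collected %d bytes\n", len(out))
	        }
	    }
	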
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
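	
	Process 71766 repeats this whole sequence (a pgrep probe for a running kube-apiserver, then the container enumeration, then log gathering) roughly every three seconds; it is effectively a bounded retry loop waiting for the control plane to come up, and in this run it never does. An illustrative wait loop with an assumed five-minute deadline (the real timeout in minikube may differ):
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )
	
	    // apiserverRunning mirrors the pgrep probe from the log: exit status 0
	    // means a matching kube-apiserver process exists.
	    func apiserverRunning() bool {
	        return exec.Command("sudo", "pgrep", "-xnf",
	            "kube-apiserver.*minikube.*").Run() == nil
	    }
	
	    func main() {
	        deadline := time.Now().Add(5 * time.Minute) // assumed timeout
	        for time.Now().Before(deadline) {
	            if apiserverRunning() {
	                fmt.Println("kube-apiserver is up")
	                return
	            }
	            // In the log, a round of log gathering fills the ~3s between probes.
	            time.Sleep(3 * time.Second)
	        }
	        fmt.Println("timed out waiting for kube-apiserver")
	    }
	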
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
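Each scan above walks a fixed list of component names through crictl; an empty result for every name means no container, running or exited, was ever created for the control plane. A condensed shell equivalent using the same flags as logged:

	# Same crictl invocation as in the log, looped over the component names.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done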
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
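The "container status" collector is deliberately runtime-agnostic: `which crictl || echo crictl` resolves crictl's full path when installed (falling back to the bare name so the failure message stays readable), and the trailing `|| sudo docker ps -a` retries with Docker if the CRI listing fails. The same fallback written with $(...) instead of backticks:

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a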
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
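The interleaved pod_ready lines come from separate test processes (pids 71396, 72069, 71227), each polling its own profile's metrics-server pod roughly every two seconds until the pod reports Ready. A hypothetical manual equivalent of one such probe; the kubectl context name is not shown in this excerpt:

	# <context> is a placeholder for the profile's kubectl context.
	kubectl --context <context> -n kube-system get pod metrics-server-569cc877fc-dm7k7 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'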
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
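In the dmesg collector, -P disables the pager, -H switches to human-readable timestamps, -L=never forces color off, and --level keeps only warning-or-worse records; the tail caps the output at 400 lines. The same command with long options, for readability:

	sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400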
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
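Because several profiles write into one stream, timestamps jump backwards whenever the writer switches pids; each pid's own lines remain in order. A hypothetical way to follow a single thread from a saved copy of this log:

	# minikube.log is a placeholder filename for a saved copy of this output.
	grep ' 71766 ' minikube.log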
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
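Every gather cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches the pattern against the full command line, -x requires the whole line to match, and -n returns only the newest matching pid, so a non-zero exit (no apiserver process) is what keeps these cycles repeating. A minimal sketch of the wait this implies; the interval and any timeout are assumptions, not taken from the log:

	# Assumed fixed 3s interval; the real retry/backoff policy is not visible here.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done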
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
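All of these describe-nodes attempts use the kubectl pinned under /var/lib/minikube/binaries/v1.20.0/ against /var/lib/minikube/kubeconfig, so this retry loop belongs to the legacy v1.20.0 control plane that never comes up. A hypothetical check of which endpoint that kubeconfig targets:

	# Confirms the server URL (expected: https://localhost:8443, per the errors above).
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl config view \
	  --kubeconfig=/var/lib/minikube/kubeconfig -o jsonpath='{.clusters[0].cluster.server}'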
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
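	[Editor's note] The interleaved pod_ready.go:102 lines come from three other test runs (PIDs 71396, 72069, 71227), each polling a metrics-server pod whose Ready condition stays "False". A rough client-go sketch of that check (not minikube's implementation; the kubeconfig path is a placeholder and the pod name is copied from the log):

	// pod_ready_check.go - read a pod and report its PodReady condition,
	// the status the runs above keep logging as "False".
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-569cc877fc-dm7k7", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("Ready condition: %s (reason: %s)\n", cond.Status, cond.Reason)
			}
		}
	}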
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
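	[Editor's note] The "container status" step runs a shell fallback: `which crictl || echo crictl` picks crictl off PATH (or the bare name), and if the crictl listing fails entirely the `|| sudo docker ps -a` half tries the Docker runtime instead. A schematic Go equivalent of that preference order, assuming only the two binary names seen in the logged command:

	// container_status.go - try crictl first, then fall back to docker,
	// mirroring `crictl ps -a || docker ps -a` from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		attempts := [][]string{
			{"crictl", "ps", "-a"},
			{"docker", "ps", "-a"},
		}
		for _, a := range attempts {
			out, err := exec.Command(a[0], a[1:]...).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			fmt.Printf("%s failed (%v), trying next runtime\n", a[0], err)
		}
	}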
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
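	[Editor's note] Run 71766 repeats the whole gather cycle roughly every three seconds (00:52:46, :49, :52, :55, ...), re-checking for a kube-apiserver process each time. A schematic of that cadence; the interval and deadline below are inferred from the timestamps, not taken from minikube's source:

	// retry_gather.go - poll for the apiserver on a ticker until it
	// appears or an overall deadline expires.
	package main

	import (
		"fmt"
		"time"
	)

	func apiserverUp() bool {
		// Stand-in for `sudo pgrep -xnf kube-apiserver.*minikube.*`.
		return false
	}

	func main() {
		deadline := time.After(5 * time.Minute) // assumed overall budget
		tick := time.NewTicker(3 * time.Second) // cadence read off the log
		defer tick.Stop()
		for {
			select {
			case <-deadline:
				fmt.Println("gave up waiting for kube-apiserver")
				return
			case <-tick.C:
				if apiserverUp() {
					fmt.Println("kube-apiserver is running")
					return
				}
				// Between checks the real run gathers kubelet, dmesg,
				// CRI-O, and container-status logs, as seen above.
			}
		}
	}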
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
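Each cycle above is process 71766 probing every control-plane component with `sudo crictl ps -a --quiet --name=<component>`, finding no containers, and then falling back to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal sketch of the same probe loop, assuming crictl is on the node's PATH (the component names are exactly the ones this log checks):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$c\""   # mirrors logs.go:278 above
      else
        echo "$c: $ids"
      fi
    done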
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
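With every container probe empty, each pass falls back to gathering raw host logs, and the same four sources recur throughout: kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg, and container status via crictl with a docker fallback. A stripped-down local sketch of that gather loop (the real runs go through minikube's SSH runner, so this shows only the shape of it):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			fmt.Printf("Gathering logs for %s ...\n", s.name)
			// bash -c matches the Run lines above; CombinedOutput keeps stderr too.
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gather failed: %v\n", err)
			}
			fmt.Print(string(out))
		}
	}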
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
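Every "describe nodes" gather fails the same way: the bundled v1.20.0 kubectl is refused on localhost:8443 because no kube-apiserver is listening, which is consistent with the empty crictl probes and with the pgrep check that opens each pass. A minimal sketch of that liveness check (pgrep exits non-zero when nothing matches, which exec surfaces as an error):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func apiServerRunning() bool {
		// Same probe as the "pgrep -xnf kube-apiserver.*minikube.*" Run lines above.
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		if !apiServerRunning() {
			fmt.Println("kube-apiserver process not found; kubectl calls to localhost:8443 will be refused")
		}
	}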
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
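The timestamps show the whole probe-and-gather pass repeating on roughly a three-second cadence (00:53:47, :50, :53, :56, :59, 00:54:02, ...), i.e. a plain retry loop around the apiserver check that only ends at an overall deadline. A stdlib-only sketch of that cadence (the interval and deadline here are assumptions for illustration, not minikube's configured values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("apiserver is up")
				return
			}
			// ... a real pass would gather kubelet/dmesg/CRI-O logs here, as above ...
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}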
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
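
Every "describe nodes" attempt in these rounds fails the same way: nothing is listening on localhost:8443 because no kube-apiserver container exists yet, so kubectl's connection is refused before any API call happens. The failure mode can be reproduced with a plain TCP probe; this is a sketch, and the address assumes minikube's default apiserver port from the log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If this dial is refused, `kubectl --kubeconfig ... describe nodes`
	// against localhost:8443 cannot succeed either.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
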
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
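
The block above is minikube's stale-kubeconfig check: each /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and removed when the endpoint is absent; a missing file takes the same path, hence the unconditional `rm -f`. A pure-Go sketch of the same logic, with the endpoint and file names taken from the log:

package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, name := range []string{
		"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf",
	} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		// Unreadable or missing files, and files that never mention the
		// endpoint, are both treated as stale and removed (rm -f semantics).
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(path)
		}
	}
}
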
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
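
At this point kubeadm has written the four static-pod manifests and waits up to 4m0s for the kubelet to start them. Whether the manifests actually landed can be checked directly; a sketch, with the path being the manifest folder named in the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The wait-control-plane phase watches for static pods created from
	// these manifests; listing the folder confirms kubeadm wrote them.
	entries, err := os.ReadDir("/etc/kubernetes/manifests")
	if err != nil {
		fmt.Println("manifest folder not readable:", err)
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // expect etcd.yaml, kube-apiserver.yaml, ...
	}
}
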
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
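
This is the other side of the `pod_ready.go:102` lines threaded through the whole section: the pod is polled until its Ready condition turns True or the 4m0s budget expires, and here the budget expires for metrics-server. A minimal client-go sketch of such a wait follows; the kubeconfig path, pod name, and poll interval are illustrative, not minikube's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-78fcd8795b-k5q49", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll, as pod_ready.go does
	}
	fmt.Println("timed out waiting for pod to be Ready") // matches the E-level line above
}
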
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
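The block above repeats one pattern per component: sudo crictl ps -a --quiet --name=<component> resolves container IDs (the cri.go lines), then each ID is tailed with crictl logs --tail 400 <id> (the logs.go lines). A condensed sketch of that loop, assuming crictl is reachable through the same shell the runner uses; component names and the --tail 400 value come from the log itself:

package main

// Resolve container IDs for each control-plane component with crictl,
// then tail the last 400 log lines of every container found.
import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one 64-char ID per line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no %q containers found\n", c) // e.g. kindnet above
			continue
		}
		for _, id := range ids {
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, out)
		}
	}
}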
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
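In the join commands printed above, --discovery-token-ca-cert-hash pins the cluster CA so a joining node cannot be redirected by a man-in-the-middle during bootstrap. The hash is the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch, assuming the CA is readable at the usual minikube node path below (an assumption; not part of the log):

package main

// Recompute the sha256:... value shown in the kubeadm join lines above
// from the cluster CA certificate.
import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}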
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
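The healthz wait above (api_server.go:253/279) is a plain HTTPS GET that is satisfied by a 200 response whose body is "ok". A self-contained sketch; InsecureSkipVerify keeps the example short, whereas the real client would trust the cluster CA (assumption):

package main

// Probe the apiserver /healthz endpoint the way the log above does.
import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.32:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}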
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
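The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge configuration chosen at cni.go:146 ("kvm2" driver + "crio" runtime). The sketch below emits a representative bridge + host-local conflist; the field values and the pod CIDR are assumptions for illustration and not necessarily byte-identical to what minikube writes:

package main

// Emit a minimal CNI conflist of the kind written to /etc/cni/net.d above.
import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out)) // would land in /etc/cni/net.d/1-k8s.conflist
}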
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
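The WaitForService step above resolves with a single command whose exit code carries the whole answer: systemctl is-active --quiet exits 0 when the unit is active and non-zero otherwise. A sketch:

package main

// Check kubelet liveness the way system_svc.go does above: the error
// from Run() (i.e. the exit status) is the entire result.
import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err) // non-zero exit status
		return
	}
	fmt.Println("kubelet is active")
}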
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
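The elevateKubeSystemPrivileges wait above is gated on the repeated "kubectl get sa default" runs: the "default" ServiceAccount only appears once the controller-manager's service-account controller is up, so the runner polls roughly twice a second until the command succeeds. A sketch of that loop (the kubectl path is taken from the log; the timeout budget is an assumption):

package main

// Poll until the "default" ServiceAccount exists, mirroring the repeated
// "get sa default" runs in the log above.
import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}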
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
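The node_ready.go check above inspects the node's NodeReady condition through the API. A hedged client-go equivalent; the kubeconfig path and node name come from the log, the rest is illustrative rather than minikube's actual code:

package main

// Fetch the node and report its Ready condition, as node_ready.go does above.
import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-945581", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node Ready: %s\n", c.Status) // "True" in the log above
		}
	}
}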
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
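
The addon step above copies each manifest into /etc/kubernetes/addons/ and then applies them all in a single kubectl invocation. Below is a hedged sketch of that final apply, with the binary path and manifest names copied from the log; the ssh transport and the manifest contents themselves are omitted:

    // enable_addons.go - sketch: one `sudo env KUBECONFIG=... kubectl apply`
    // covering all metrics-server manifests, as in the log line above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            panic(string(out))
        }
        fmt.Print(string(out))
    }
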
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
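
The healthz wait above polls https://192.168.50.251:8443/healthz until it answers 200 with body "ok". A minimal sketch of such a probe follows; for illustration it skips TLS verification, whereas the real client authenticates against the cluster CA:

    // healthz_probe.go - sketch of an apiserver healthz poll like the one
    // logged above; not minikube's api_server.go implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Illustration only: a real client would trust the cluster CA
            // instead of disabling verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil // apiserver reported healthy
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.251:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
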
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
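
The "minor skew: 1" note above compares the kubectl client (1.30.3) with the cluster (1.31.0-beta.0). A small sketch of how such a skew figure can be computed from the two version strings; the parsing here is deliberately naive and assumes well-formed "major.minor.patch" inputs:

    // skew_check.go - sketch of the client/cluster minor-version comparison
    // behind the "minor skew" line above; not minikube's actual code.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minorOf(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1]) // assumes "major.minor.patch"
        return m
    }

    func main() {
        client, cluster := "1.30.3", "1.31.0-beta.0"
        skew := minorOf(cluster) - minorOf(client)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }
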
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
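
The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file missing it (here every file was already absent after the reset). A local sketch of that stale-config sweep, with paths and endpoint taken from the log; the real flow runs these checks on the guest over ssh:

    // stale_config_cleanup.go - sketch: treat a kubeconfig as stale when it
    // does not mention the expected control-plane endpoint, and remove it.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: clean it up before kubeadm init.
                fmt.Printf("removing stale config %s\n", f)
                os.Remove(f)
            }
        }
    }
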
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
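
The line above writes a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist. The sketch below writes a plausible minimal bridge+portmap conflist; the exact fields and the pod subnet are illustrative assumptions, not minikube's verbatim template:

    // bridge_cni.go - sketch of materializing a bridge CNI conflist like the
    // one scp'd above; the JSON body is an assumed minimal example.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
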
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
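
The oom_adj line above comes from resolving the kube-apiserver pid and reading /proc/<pid>/oom_adj; -16 tells the kernel's OOM killer to strongly prefer other victims over the apiserver. A sketch of the same read, assuming pgrep returns a single pid:

    // oom_adj.go - sketch of the /proc/$(pgrep kube-apiserver)/oom_adj check
    // run in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        // Assumes exactly one matching process on the node.
        data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", data)
    }
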
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
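
The burst of `kubectl get sa default` runs above is a simple poll: the command is retried roughly every 500ms until the default service account exists, which is what elevateKubeSystemPrivileges waits for. A hedged sketch of that loop, with the binary and kubeconfig paths from the log and an assumed two-minute cap:

    // wait_default_sa.go - sketch of the service-account poll loop visible in
    // the repeated `kubectl get sa default` lines above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
        deadline := time.Now().Add(2 * time.Minute) // assumed cap, not from the log
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
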
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
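The addon summary above can be cross-checked per profile from the host; a minimal sketch, using the profile name from this run:

	minikube -p default-k8s-diff-port-214905 addons list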
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
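The healthz probe above can be reproduced by hand against the same endpoint. A minimal sketch, assuming the VM address 192.168.61.97:8444 from the log is still reachable (-k because the cluster CA is not in the local trust store):

	curl -k https://192.168.61.97:8444/healthz
	# a healthy apiserver answers with the body: ok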
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
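The "k8s-apps running" wait above polls the pod list until kube-dns and kube-proxy leave Pending. A rough host-side equivalent (a sketch; the label selectors follow the upstream component labels shown in the pod list, and Ready is a slightly stricter condition than Running):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=90s
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=90s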
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
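With the cluster reported ready, the result can be spot-checked from the host. A sketch, assuming the kubectl context name matches the minikube profile name (minikube's default behavior):

	kubectl --context default-k8s-diff-port-214905 get nodes
	kubectl --context default-k8s-diff-port-214905 -n kube-system get pods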
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
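The probe and troubleshooting commands named in the failure above can be run on the node itself. A sketch, assuming this run belongs to the old-k8s-version-366657 profile (the hostname that appears in the CRI-O log at the end of this section):

	minikube -p old-k8s-version-366657 ssh
	# inside the VM:
	curl -sSL http://localhost:10248/healthz    # the probe kubeadm keeps retrying
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50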
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
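The four grep/rm pairs above implement one pattern: keep a kubeconfig only if it points at the expected control-plane endpoint. A condensed sketch of the same cleanup (file names and URL are taken from the log; the loop form is illustrative):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done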
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 
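Spelled out as a command, the suggested retry would look like the sketch below; the profile name is inferred from the CRI-O log that follows, and whether the systemd cgroup driver resolves this particular failure is not established by this report:

	minikube start -p old-k8s-version-366657 --extra-config=kubelet.cgroup-driver=systemd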
	
	
	==> CRI-O <==
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.218540625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610463218518218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b7b66e8-c003-4832-80ad-54b20efce288 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.219342070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c38dabf9-6744-4b57-bc83-55868913a112 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.219475192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c38dabf9-6744-4b57-bc83-55868913a112 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.219532980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c38dabf9-6744-4b57-bc83-55868913a112 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.249194632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3bede8c-fb51-4972-83ab-5c4db43c2f97 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.249306340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3bede8c-fb51-4972-83ab-5c4db43c2f97 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.250362612Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa512dfa-5bc8-48b8-9daa-cde7e38fdd46 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.250786862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610463250762473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa512dfa-5bc8-48b8-9daa-cde7e38fdd46 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.251280432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b756fa0-e3b7-441a-b4a6-7de027eb3271 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.251345962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b756fa0-e3b7-441a-b4a6-7de027eb3271 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.251416547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6b756fa0-e3b7-441a-b4a6-7de027eb3271 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.280854551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8de72192-0baf-4b5c-8d38-794122575922 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.280942275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8de72192-0baf-4b5c-8d38-794122575922 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.281963888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7324cb98-d063-4437-839f-a335d4cf2ca0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.282430951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610463282363012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7324cb98-d063-4437-839f-a335d4cf2ca0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.282862128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6f5da94-2c74-448e-8d17-03a7ff8db435 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.282912232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6f5da94-2c74-448e-8d17-03a7ff8db435 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.282947656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f6f5da94-2c74-448e-8d17-03a7ff8db435 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.313459969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=481ac6b0-5833-4edc-b2f6-8c84903aaabe name=/runtime.v1.RuntimeService/Version
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.313537650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=481ac6b0-5833-4edc-b2f6-8c84903aaabe name=/runtime.v1.RuntimeService/Version
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.314750944Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e8d5a34-4475-4b02-b3bd-864a31b4205f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.315109779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610463315087015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e8d5a34-4475-4b02-b3bd-864a31b4205f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.315631087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca12c70e-1d09-4469-9285-f6279ce32582 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.315682106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca12c70e-1d09-4469-9285-f6279ce32582 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:07:43 old-k8s-version-366657 crio[629]: time="2024-07-22 01:07:43.315714701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ca12c70e-1d09-4469-9285-f6279ce32582 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul22 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051104] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039554] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.496567] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.796830] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544248] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.276300] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.064156] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073267] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.169185] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.171264] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.282291] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +6.446308] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.069249] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.917900] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[ +11.851684] kauditd_printk_skb: 46 callbacks suppressed
	[Jul22 00:54] systemd-fstab-generator[5055]: Ignoring "noauto" option for root device
	[Jul22 00:56] systemd-fstab-generator[5340]: Ignoring "noauto" option for root device
	[  +0.066214] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:07:43 up 17 min,  0 users,  load average: 0.02, 0.03, 0.00
	Linux old-k8s-version-366657 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]: net.(*sysDialer).dialSerial(0xc0008de000, 0x4f7fe40, 0xc000202c00, 0xc000c9c960, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]:         /usr/local/go/src/net/dial.go:548 +0x152
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]: net.(*Dialer).DialContext(0xc000c6b260, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c2e090, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c6f3c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c2e090, 0x24, 0x60, 0x7f2101071768, 0x118, ...)
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]: net/http.(*Transport).dial(0xc000690500, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c2e090, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]: net/http.(*Transport).dialConn(0xc000690500, 0x4f7fe00, 0xc000120018, 0x0, 0xc000d89860, 0x5, 0xc000c2e090, 0x24, 0x0, 0xc000cb0240, ...)
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]: net/http.(*Transport).dialConnFor(0xc000690500, 0xc000bae840)
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]: created by net/http.(*Transport).queueForDial
	Jul 22 01:07:38 old-k8s-version-366657 kubelet[6510]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 22 01:07:38 old-k8s-version-366657 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 22 01:07:38 old-k8s-version-366657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 22 01:07:38 old-k8s-version-366657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 22 01:07:38 old-k8s-version-366657 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 22 01:07:38 old-k8s-version-366657 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 22 01:07:39 old-k8s-version-366657 kubelet[6519]: I0722 01:07:39.027759    6519 server.go:416] Version: v1.20.0
	Jul 22 01:07:39 old-k8s-version-366657 kubelet[6519]: I0722 01:07:39.027982    6519 server.go:837] Client rotation is on, will bootstrap in background
	Jul 22 01:07:39 old-k8s-version-366657 kubelet[6519]: I0722 01:07:39.029979    6519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 22 01:07:39 old-k8s-version-366657 kubelet[6519]: W0722 01:07:39.030896    6519 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 22 01:07:39 old-k8s-version-366657 kubelet[6519]: I0722 01:07:39.031039    6519 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 2 (239.152548ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-366657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.41s)
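
The kubeadm output captured above already names the triage steps for this failure mode; a minimal sketch of running them against the node, assuming shell access to the VM via the minikube binary used in this run (profile name taken from the logs, everything else illustrative):

	# open a shell on the affected node
	out/minikube-linux-amd64 ssh -p old-k8s-version-366657
	# inside the VM: check whether the kubelet is running and why it last exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list control-plane containers via CRI-O, as the kubeadm message suggests
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the cgroup-driver mismatch hinted at in the suggestion line (and in the kubelet's "Cannot detect current cgroup on cgroup v2" warning) is the cause, the retry that the log itself proposes would be:

	out/minikube-linux-amd64 start -p old-k8s-version-366657 --extra-config=kubelet.cgroup-driver=systemd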

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.96s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-360389 -n embed-certs-360389
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-22 01:13:29.839640573 +0000 UTC m=+6528.548239283
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-360389 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-360389 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (57.540146ms)

** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-360389 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
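The assertion above checks that the dashboard-metrics-scraper deployment references registry.k8s.io/echoserver:1.4, the image substituted via --images=MetricsScraper= in the Audit log below. A minimal sketch of the same check by hand, assuming the context name from this run (the jsonpath expression is illustrative):

	# print each deployment in the dashboard namespace with its container images
	kubectl --context embed-certs-360389 get deploy -n kubernetes-dashboard -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'

Here the check fails one step earlier: the kubernetes-dashboard namespace was never created, which is why the describe call returns NotFound.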
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-360389 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-360389 logs -n 25: (1.895245791s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 01:10 UTC | 22 Jul 24 01:10 UTC |
	| delete  | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 01:10 UTC | 22 Jul 24 01:10 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 01:11 UTC | 22 Jul 24 01:11 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:19.710843   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:25.790913   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:28.862882   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:34.942917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:38.014863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:44.094898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:47.166853   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:53.246799   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:56.318939   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:02.398890   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:05.470909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:11.550863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:14.622851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:20.702859   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:23.774851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:29.854925   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:32.926912   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:39.006904   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:42.078947   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:48.158822   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:51.230942   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:57.310909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:00.382907   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:06.462849   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:09.534836   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:15.614953   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:18.686869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:24.766917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:27.838869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:33.918902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:36.990920   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:43.070898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
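The "retry.go:31" lines above show libmachine polling libvirt for the domain's DHCP lease, sleeping a little longer each round. A minimal Go sketch of that wait-with-growing-jittered-delay pattern (the helper name, growth factor, and callback are illustrative, not minikube's actual retry API):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or the deadline
    // passes, roughly doubling the base delay and adding jitter each round,
    // mirroring the "will retry after ..." lines in the log.
    func retryWithBackoff(fn func() error, deadline time.Duration) error {
    	start := time.Now()
    	delay := 200 * time.Millisecond
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("timed out waiting for machine: %w", err)
    		}
    		// Jitter the sleep so concurrent waiters don't poll in lockstep.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }

    func main() {
    	attempts := 0
    	err := retryWithBackoff(func() error {
    		attempts++
    		if attempts < 4 { // pretend the lease shows up on the 4th poll
    			return errors.New("unable to find current IP address of domain")
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("result:", err)
    }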
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
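WaitForSSH above shells out to the system ssh binary with the flags printed in the log and runs "exit 0" until the command succeeds; a zero exit status is the only readiness signal the provisioner needs. A rough equivalent with os/exec (host and flags are taken from the log; the key path, timeout, and error handling are simplified placeholders):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH runs `ssh ... exit 0` against the guest until it exits zero.
    // Assumes a system ssh client is on PATH, as in the log above.
    func waitForSSH(host, keyPath string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("ssh",
    			"-F", "/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "IdentitiesOnly=yes",
    			"-i", keyPath,
    			"docker@"+host,
    			"exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // SSH is up
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("ssh to %s not available after %v", host, timeout)
    }

    func main() {
    	err := waitForSSH("192.168.50.251", "/path/to/id_rsa", 30*time.Second)
    	fmt.Println(err)
    }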
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
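configureAuth regenerates a server certificate whose SAN list covers every name and address the machine may be reached on (the san=[...] list in the line above). A compact crypto/x509 illustration of building such a certificate; this sketch self-signs for brevity, whereas minikube signs with the CA key pair listed earlier:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-945581"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log line above: every IP and hostname the server answers to.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.251")},
    		DNSNames:    []string{"localhost", "minikube", "no-preload-945581"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }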
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
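The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup driver, sysctls), loads br_netfilter, enables IP forwarding, and restarts crio. Sketched below as a list of remote commands run through a hypothetical runSSH helper; the sed expressions are the ones from the log, while the helper itself is illustrative (minikube routes these through its own ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runSSH executes one command on the guest via the system ssh binary.
    // Illustrative helper only.
    func runSSH(host, keyPath, command string) error {
    	cmd := exec.Command("ssh", "-o", "StrictHostKeyChecking=no",
    		"-i", keyPath, "docker@"+host, command)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s: %v (%s)", command, err, out)
    	}
    	return nil
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	steps := []string{
    		// Point crio at the right pause image and cgroup driver.
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + conf,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
    		// Make sure bridged traffic hits iptables, then restart the runtime.
    		"sudo modprobe br_netfilter",
    		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    	for _, s := range steps {
    		if err := runSSH("192.168.50.251", "/path/to/id_rsa", s); err != nil {
    			fmt.Println("step failed:", err)
    			return
    		}
    	}
    }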
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
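
The image-load trace above repeats one pattern per cached image: inspect the runtime for the expected image ID, remove the stale tag if the ID does not match, copy the cached tarball if needed, then load it with podman. A minimal shell sketch of that pattern (the image name and tarball path below are illustrative placeholders, not values taken from this run):

    # Load a cached image only if the runtime doesn't already have it by ID;
    # a stale tag is removed first, then the tarball is loaded via podman.
    IMG=registry.k8s.io/kube-apiserver:v1.31.0-beta.0            # placeholder
    TAR=/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0   # placeholder
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true
        sudo podman load -i "$TAR"
    fi
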
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
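
The generated kubeadm config above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. As a hedged aside, recent kubeadm releases (v1.26 and newer) can lint such a multi-document file before it is consumed; the path below assumes the staging location this log writes to a few lines later:

    # Sketch: validate the generated multi-document kubeadm config
    # (kubeadm config validate is available in kubeadm v1.26+).
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
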
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
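
The dense one-liner above is an idempotent /etc/hosts update: the preceding grep checks whether the entry already exists, and the rewrite filters any stale control-plane.minikube.internal line before appending the current IP and installing the result via a temp file. Expanded for readability (a sketch of the same pattern; the variable names are illustrative):

    IP=192.168.50.251
    NAME=control-plane.minikube.internal
    # Drop any existing "<something>\t<name>" line, append the fresh entry,
    # then copy the temp file over /etc/hosts.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
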
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
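
The ln targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: OpenSSL looks CA certificates up in /etc/ssl/certs by a hash of the subject name, with a numeric suffix as a collision counter. A sketch of how such a link name is derived (using one of the cert paths from this log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # .0 = first cert with this hash
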
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
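
Each -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if it will have expired by then, which is how the caller detects imminent expiry. A sketch (the cert path is a placeholder):

    # Exit status drives the decision: non-zero means the cert expires
    # within the next 24 hours.
    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "certificate expires within 24h; regeneration needed" >&2
    fi
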
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
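
The four grep/rm pairs above apply one rule to each kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so it is regenerated. As a loop, the logged commands are equivalent to roughly this sketch:

    # Remove any kubeconfig that does not reference the expected
    # control-plane endpoint; missing files simply fail the grep.
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
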
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
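The fix.go lines above read the guest's `date +%s.%N` output, convert it to a timestamp, and accept the drift if it is inside a tolerance window (96ms here). A sketch of that comparison; the one-second tolerance is an assumption, since the actual threshold is not shown in the log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Raw output of `date +%s.%N` on the guest, as logged above.
    	raw := "1721609425.999209033"
    	parts := strings.SplitN(strings.TrimSpace(raw), ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	guest := time.Unix(sec, nsec)

    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed threshold, not taken from the log
    	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
    }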
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
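Before choosing a runtime, the step above renames any bridge or podman CNI configs to *.mk_disabled so that only minikube's own CNI config stays active; the log shows 87-podman-bridge.conflist being disabled. A rough Go equivalent of that find-and-rename, with glob patterns taken from the `find` expression above:

    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, f := range matches {
    			if strings.HasSuffix(f, ".mk_disabled") {
    				continue // already disabled on a previous run
    			}
    			if err := os.Rename(f, f+".mk_disabled"); err != nil {
    				log.Fatal(err)
    			}
    			log.Printf("disabled %s", f)
    		}
    	}
    }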
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
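The three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, and conmon cgroup pinned. Reconstructed from the sed expressions (not copied from the VM, and the section headers are an assumption based on standard crio.conf layout), the relevant lines of the drop-in should end up roughly as:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"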
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
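The sysctl probe above fails because br_netfilter is not loaded yet, so the bridge sysctls do not exist; minikube then loads the module and enables IPv4 forwarding. A minimal reproduction of that recovery path (must run as root for the /proc write):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err != nil {
    		// Mirrors the fallback above: the sysctl only appears once br_netfilter is loaded.
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }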
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
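The preload check above asks crictl for its image list and looks for the pinned kube-apiserver tag; since it is absent, the ~470 MB preloaded-images tarball is copied in and later unpacked with `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf` (about 3.1s in this run). A sketch of the detection half, assuming the standard CRI JSON shape (`{"images":[{"repoTags":[...]}]}`):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		log.Fatal(err)
    	}
    	want := "registry.k8s.io/kube-apiserver:v1.20.0"
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("preloaded images present")
    				return
    			}
    		}
    	}
    	fmt.Println("preload missing; copy and extract preloaded.tar.lz4")
    }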
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
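The apiserver wait above simply polls /healthz, treating 403 (RBAC not bootstrapped yet) and 500 (post-start hooks still failing) as "not ready" and stopping on the first 200. A sketch of that loop; certificate verification is skipped here for brevity, whereas minikube authenticates with the cluster's client certificates:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.50.251:8443/healthz")
    		if err == nil {
    			ok := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if ok {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Printf("healthz returned %d; retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }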
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
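The "needs transfer" decisions above come from comparing each image's ID in the runtime (via `podman image inspect --format {{.Id}}`) against the hash the cache expects; a mismatch or absence triggers `crictl rmi` followed by a load from the local cache directory. A sketch of one such check, with the expected hash copied from the pause:3.2 line above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const image = "registry.k8s.io/pause:3.2"
    	// Hash the cached pause:3.2 is expected to have, per the log above.
    	const want = "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"

    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	got := strings.TrimSpace(string(out))
    	if err != nil || got != want {
    		fmt.Printf("%s needs transfer (have %q)\n", image, got)
    		// minikube then runs `crictl rmi` and loads the tarball from
    		// .minikube/cache/images/amd64/registry.k8s.io/pause_3.2.
    		return
    	}
    	fmt.Printf("%s already present\n", image)
    }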
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
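The 496-byte file written above is minikube's bridge CNI config. A minimal conflist of the kind dropped into /etc/cni/net.d/1-k8s.conflist; only the file path and the 10.244.0.0/16 pod subnet come from this log, and the remaining field values are illustrative:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/16" }]]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }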
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
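The pod_ready lines above wait per-pod for the Ready condition, skipping (with a logged reason) any pod whose node is itself not Ready. A compressed sketch of the Ready-condition check with client-go; the pod name comes from the log, while the kubeconfig path is a hypothetical placeholder:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is assumed, not taken from the log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-no-preload-945581", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			log.Fatal("timed out waiting for pod to be Ready")
    		case <-time.After(2 * time.Second):
    		}
    	}
    }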
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
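
The three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at kubeadm.go:181 and then written out as /var/tmp/minikube/kubeadm.yaml.new below. A minimal Go sketch of that render step, assuming a text/template-based generator and an illustrative subset of the fields (these types are not minikube's actual ones):

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the kubeadm options logged above; minikube's
// real generator uses a much larger parameter struct.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	// Render every document from the same struct, which keeps the
	// advertise address, subnets, and node name consistent across them.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress:  "192.168.39.174",
		BindPort:          8443,
		NodeName:          "old-k8s-version-366657",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	})
}
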
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
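
Each openssl `-checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours, which would force regeneration before restart. The same check expressed with Go's crypto/x509 (a sketch; the path is one of the certs tested above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside the given window, the same question openssl's -checkend asks.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
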
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
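
The half-second cadence of the pgrep runs above (and continuing below) is a plain poll-until-found wait for the apiserver process. A sketch of that loop, assuming the ~500ms interval read off the timestamps (not minikube's actual api_server.go code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep on the ~500ms cadence visible in the
// timestamps above until kube-apiserver shows up or the context expires.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 (Run returns nil) once a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver did not appear: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err)
	}
}
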
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
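
provision.go:117 above signs a server certificate whose subject alternative names are the logged san=[...] list, so the machine can be reached by IP or by hostname. A self-contained sketch of that signing step; the throwaway in-memory CA stands in for the ca.pem/ca-key.pem pair named in the log, and error returns are elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the logged san=[...] list.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-360389"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.32")},
		DNSNames:     []string{"embed-certs-360389", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
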
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
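
The delta logged above is the gap between the guest's `date +%s.%N` output and the host clock at the moment the command returned; the restart only proceeds cleanly when the skew is small. A sketch of the comparison (the one-second tolerance here is illustrative, not minikube's configured value):

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock delta and
// whether it falls inside the allowed skew.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Parsed from the logged guest clock 1721609447.006036489.
	guest := time.Unix(1721609447, 6036489)
	host := guest.Add(-90477635 * time.Nanosecond) // the logged 90.477635ms delta
	d, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
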
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
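The run above configures CRI-O by applying a series of idempotent sed one-liners to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). Below is a minimal Go sketch of that pattern, run locally rather than through the SSH runner the log uses; the sed expressions are lifted from the commands above, and the program assumes sudo and sed are available on the host.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, e := range edits {
		// each expression is safe to re-run: it rewrites or re-inserts the same line
		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
			log.Fatalf("sed %q failed: %v: %s", e, err, out)
		}
	}
	fmt.Println("crio config updated")
}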
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
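The sysctl probe above exits with status 255 because the br_netfilter module is not loaded yet, so the runner falls back to modprobe and then enables IPv4 forwarding. A sketch of that check-then-load fallback (assumes a Linux host and root, since it writes under /proc/sys):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// the sysctl key only appears once the module is loaded,
		// mirroring the "cannot stat" failure in the log above
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// enable IPv4 forwarding, as the log does with `echo 1 > .../ip_forward`
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}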
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
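"Will wait 60s for socket path" is a bounded existence poll against /var/run/crio/crio.sock after the crio restart. A minimal sketch of such a poll; the interval is illustrative, not minikube's exact value:

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("crio socket is up")
}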
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
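The /etc/hosts update above is deliberately idempotent: strip any existing host.minikube.internal line, append a fresh one, then copy the result back. A Go sketch of the same rewrite; ensureHostsEntry is a hypothetical helper, writing /etc/hosts needs root, and (unlike the one-liner) this version also drops blank lines:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one tab-separated
// entry mapping ip to host, mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}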
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
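The retry.go lines above show the wait-for-IP loop backing off with growing, jittered delays (231ms, 274ms, 470ms, 503ms, ...). A generic sketch of that pattern; the delay schedule here is illustrative, not minikube's exact formula:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn with a randomized, growing delay between attempts,
// echoing the "will retry after ..." lines that retry.go prints above.
func retry(maxWait time.Duration, fn func() error) error {
	start := time.Now()
	for attempt := 1; ; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		if time.Since(start) > maxWait {
			return errors.New("gave up waiting")
		}
		// base delay grows with the attempt count, plus jitter
		d := time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Intn(300))*time.Millisecond
		fmt.Printf("will retry after %s\n", d)
		time.Sleep(d)
	}
}

func main() {
	n := 0
	_ = retry(10*time.Second, func() error {
		n++
		if n < 4 {
			return errors.New("no IP yet") // stand-in for the DHCP lease lookup
		}
		return nil
	})
}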
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
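The kubeadm.yaml written above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A sketch that walks such a multi-document file with gopkg.in/yaml.v3 (an assumed dependency, not something the log shows minikube using; the input is shortened to two documents for brevity):

package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"gopkg.in/yaml.v3"
)

// A few lines from the generated config above, enough to exercise the decoder.
const cfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break // decoder yields one document per "---" section
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}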
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
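The openssl x509 -hash / ln -fs pairs above implement OpenSSL's CA lookup convention: a certificate in /etc/ssl/certs is found through a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA.pem here). A sketch of computing the hash and creating the link; it assumes openssl on PATH and write access to the certs directory:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the `<hash>.0` symlink convention OpenSSL
// uses to look up CA certificates in a certs directory.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}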
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
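The healthz wait above walks through the typical apiserver startup sequence: connection refused while the process binds, 403 for the anonymous probe while RBAC bootstraps, 500 while post-start hooks finish, then 200 with body "ok". A sketch of an equivalent poll; TLS verification is skipped because the probe is anonymous against a self-signed serving certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.32:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			// 403 while RBAC bootstraps, then 500 until post-start hooks finish
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}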
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
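(Each per-pod wait above short-circuits with "skipping!" because the hosting node is still NotReady, so pod readiness cannot be meaningful yet. Conceptually the gate is: check the node's Ready condition first, only then wait on pods. A hypothetical Go sketch of that gate, assuming kubectl on PATH:)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // nodeReady reports whether the named node's Ready condition is "True",
    // read via kubectl JSONPath — a sketch of the gate pod_ready.go applies.
    func nodeReady(node string) bool {
        out, err := exec.Command("kubectl", "get", "node", node, "-o",
            `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        for !nodeReady("embed-certs-360389") {
            fmt.Println("node not Ready yet; skipping per-pod waits")
            time.Sleep(2 * time.Second)
        }
    }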
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
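(Each addon is scp'd into /etc/kubernetes/addons on the guest and then applied with the pinned kubectl binary against the guest kubeconfig. A self-contained sketch of the apply step logged above, assuming it runs on the guest rather than over SSH:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Apply the four metrics-server manifests with the pinned kubectl,
        // matching the command logged above.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Println("apply failed:", err, string(out))
        }
    }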
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
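(Interleaved with the embed-certs run, another profile's process (pid 71766) appears to be waiting for a kube-apiserver process to come up, re-running pgrep roughly every 500ms. A minimal Go sketch of that poll loop — hypothetical, not minikube's implementation:)

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // pgrep -x matches the name exactly, -n picks the newest match,
        // -f matches against the full command line.
        for {
            out, err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Output()
            if err == nil && len(bytes.TrimSpace(out)) > 0 {
                fmt.Println("apiserver pid:", string(bytes.TrimSpace(out)))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }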
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
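(The three addons enabled here programmatically are the same ones a user would enable by hand, e.g. `minikube -p embed-certs-360389 addons enable metrics-server`. A trivial sketch of that equivalent CLI invocation, assuming the minikube binary is on PATH:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical equivalent of what the test driver just did through
        // the Go API: enable the same addon via the minikube CLI.
        out, err := exec.Command("minikube", "-p", "embed-certs-360389",
            "addons", "enable", "metrics-server").CombinedOutput()
        fmt.Println(string(out), err)
    }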
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
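(WaitForSSH above probes the guest by running `exit 0` through an external ssh client with host-key checking disabled and only the machine's identity file offered, exactly the flag set logged. A sketch of that probe, not libmachine itself:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshUp returns true once the guest accepts an SSH session.
    func sshUp(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil // a zero exit status means SSH is up
    }

    func main() {
        fmt.Println(sshUp("192.168.61.97",
            "/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa"))
    }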
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
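(The shell fragment above makes the freshly set hostname resolve locally — rewriting an existing 127.0.1.1 entry or appending one — so tools that reverse-resolve the hostname don't stall. A sketch that builds and runs the same fix-up for an arbitrary hostname; hypothetical, mirroring the logged script:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        host := "default-k8s-diff-port-214905"
        // Same logic as the logged command: rewrite an existing 127.0.1.1
        // line if present, otherwise append one.
        script := fmt.Sprintf(
            `if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, host)
        out, err := exec.Command("bash", "-c", script).CombinedOutput()
        fmt.Println(string(out), err)
    }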
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
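(provision.go generates a server certificate whose SANs cover every name the machine may be reached by: loopback, the DHCP-assigned IP, the profile name, localhost, and minikube. libmachine signs it with the CA key pair from certs/; the sketch below self-signs purely to show the SAN template, using only the Go standard library:)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-214905"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list logged above.
            DNSNames:    []string{"default-k8s-diff-port-214905", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.97")},
        }
        // Self-signed here for brevity; libmachine signs with its CA instead.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }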
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
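(The tolerance check is plain subtraction at nanosecond precision: the guest reports epoch 1721609467.506036600, the host observed 00:51:07.424041395 UTC, and 0.506036600 − 0.424041395 = 0.081995205 s, i.e. the logged 81.995205ms. A standalone Go check of that arithmetic:)

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Reproduce the logged delta: guest clock minus host-observed time.
        guest := time.Unix(1721609467, 506036600)
        remote := time.Date(2024, 7, 22, 0, 51, 7, 424041395, time.UTC)
        fmt.Println(guest.Sub(remote)) // 81.995205ms
    }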
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
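Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a drop-in roughly like the following. This is a reconstruction from the commands in this log, not a dump of the actual file, and the enclosing crio table headers are omitted.

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]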
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
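The one-liner above rewrites /etc/hosts idempotently: it strips any existing host.minikube.internal entry, then appends the current one. A self-contained Go sketch of the same filter-and-append step follows; it prints the rewritten file instead of sudo-copying it back the way minikube does.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.61.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Equivalent of grep -v $'\thost.minikube.internal$': drop stale entries.
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		fmt.Println(strings.Join(kept, "\n"))
	}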
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
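Each of the openssl x509 -checkend 86400 runs above asks whether a certificate expires within the next 86400 seconds (24 hours); a failing check is what triggers regeneration. The same test expressed in Go, as a minimal sketch (the path is one of those from the log; any PEM certificate works):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of "openssl x509 -checkend 86400".
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 86400s; it would be regenerated")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}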
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
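The healthz sequence above (403 while RBAC roles are still being bootstrapped for the anonymous user, then 500 while a few poststarthooks fail, then 200) is the normal progression of an apiserver coming up. Roughly what api_server.go does is poll every 500ms until 200 or timeout; the Go sketch below mirrors that loop with illustrative names and timings, and skips TLS verification where a real client would trust minikube's CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Illustrative only: the apiserver cert is signed by minikube's own CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // body is "ok"
				}
				// 403 and 500 both mean "still bootstrapping"; keep waiting.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.97:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}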
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
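Meanwhile the 71766 run is waiting for a kube-apiserver process to exist at all, re-running pgrep over SSH on a roughly 500ms cadence. A hedged local sketch of that wait loop (the helper name is invented; minikube issues the same command through its ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess re-runs pgrep until a kube-apiserver process
// whose command line mentions "minikube" shows up, mirroring the 500ms
// retry cadence visible in the log. pgrep exits non-zero when nothing
// matches, which surfaces here as err != nil.
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x: exact match, -n: newest only, -f: match the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // PID of the newest matching process
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no kube-apiserver process after %s", timeout)
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("kube-apiserver pid: ", pid)
}
```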
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
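Configuring bridge CNI amounts to writing one conflist into /etc/cni/net.d. The log only shows the 496-byte copy, not the file's contents, so the sketch below writes an illustrative bridge conflist of the same general shape; the subnet, bridge name, and plugin list are assumptions, not minikube's exact file:

```go
package main

import (
	"fmt"
	"os"
)

// An illustrative bridge CNI conflist in the shape minikube installs as
// /etc/cni/net.d/1-k8s.conflist. Field values here are assumptions.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors the `sudo mkdir -p /etc/cni/net.d` step above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
```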
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
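The NodePressure step reads each node's capacity and checks that no pressure condition is True. A rough client-go equivalent of that read (a standalone approximation, not minikube's node_conditions.go; it assumes a kubeconfig at the default path):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity figures like "ephemeral capacity is 17734596Ki" above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		// Any pressure condition being True would fail the check.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  %s is True\n", c.Type)
				}
			}
		}
	}
}
```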
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
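All of the pod_ready lines in this section, including the interleaved metrics-server waits from the other runs, poll one predicate: the pod's Ready condition turning True. A minimal client-go sketch of that check, assuming a kubeconfig at the default path and using the coredns pod name from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True: the same
// predicate behind the `has status "Ready":"False"` lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 2s for up to 4m0s, like the pod_ready waits in the log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-7db6d8ff4d-tr5z2", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```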
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
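Each gathering cycle in the 71766 run follows the same recipe: query crictl for container IDs per control-plane component, and when every query comes back empty, fall back to collecting kubelet, dmesg, describe-nodes, and CRI-O logs. A local sketch of the per-component listing step (the function name is invented; minikube runs the identical crictl command over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns container IDs in all states whose name matches
// the given component: the same query as
// `sudo crictl ps -a --quiet --name=<component>` in the log.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			// Mirrors: No container was found matching "<component>".
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```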
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
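Once every container probe comes back empty, the cycle gathers diagnostics, as just seen: the kubelet and CRI-O journals, filtered dmesg output, container status (with a crictl-to-docker fallback), and `kubectl describe nodes`, which fails here because nothing is listening on localhost:8443. A compact Go sketch of that collection step, with the commands copied from the log above (error handling simplified; this is not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run each diagnostic through `bash -c` so the pipes and fallbacks
	// in the original commands work unchanged.
	diagnostics := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
	}
	for _, d := range diagnostics {
		out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
		if err != nil {
			// e.g. describe nodes exits 1 while the apiserver is unreachable.
			fmt.Printf("gathering %s failed: %v\n", d.name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", d.name, out)
	}
}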
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
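Each retry of the loop begins with `sudo pgrep -xnf kube-apiserver.*minikube.*`, checking whether a minikube-started apiserver process exists before re-probing the container runtime. One way such a check could be expressed in Go (illustrative only; the function name is hypothetical):

import (
	"errors"
	"os/exec"
)

// apiserverRunning reports whether a kube-apiserver process started by
// minikube exists. pgrep exits 0 when at least one process matches and
// 1 when none do; any other failure is a real error.
func apiserverRunning() (bool, error) {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return false, nil
	}
	return false, err
}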
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
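
(Editor's sketch.) The block above is one complete iteration of the health-check loop that process 71766 repeats throughout this stretch of the log: it probes each expected control-plane component with `crictl ps -a --quiet --name=<component>`, finds no containers, and falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The refused connection to localhost:8443 in the describe-nodes step follows directly from the empty probe results: with no kube-apiserver container running, nothing is listening on the apiserver port, so every iteration fails the same way. Below is a minimal Go sketch of the probe step, with a hypothetical runSSH helper standing in for minikube's ssh_runner; it is an illustration of the pattern visible in the log, not minikube's actual code.

    // Illustrative sketch of the probe seen above: list all containers
    // (running or exited) whose name matches each control-plane component,
    // via crictl. runSSH is assumed, not a real minikube API.
    package main

    import (
    	"fmt"
    	"strings"
    )

    // runSSH is assumed to run a command on the node and return its stdout.
    func runSSH(cmd string) (string, error) {
    	return "", nil // stub: a real implementation would exec over SSH
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, _ := runSSH("sudo crictl ps -a --quiet --name=" + name)
    		ids := strings.Fields(out) // crictl --quiet prints one container ID per line
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    		}
    	}
    }
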
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
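
(Editor's sketch.) When every probe comes back empty, the checker gathers the same five log sources each round, in varying order. Only the command strings below are taken verbatim from the runs above; the surrounding Go program is an illustrative stand-in for the real gathering code.

    package main

    import "fmt"

    // logSources pairs each "Gathering logs for ..." step seen in the cycles
    // above with the exact command the runner executes.
    var logSources = map[string]string{
    	"kubelet":          `sudo journalctl -u kubelet -n 400`,
    	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    	"describe nodes":   `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
    	"CRI-O":            `sudo journalctl -u crio -n 400`,
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
    	for step, cmd := range logSources {
    		fmt.Printf("%-16s %s\n", step, cmd)
    	}
    }
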
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
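
(Editor's sketch.) Interleaved with the health-check loop, three other clusters (pids 71227, 71396, 72069) sit in pod_ready.go polls waiting for their metrics-server pods to report Ready, which never happens in this run. A check like those lines amounts to reading the pod's Ready condition; the sketch below uses client-go and is illustrative only (the package, function name, and wiring are assumptions, and constructing the clientset is omitted).

    // Package readiness is an illustrative sketch: isPodReady reports whether
    // a pod's PodReady condition is True, the status the pod_ready.go:102
    // lines above keep observing as "False".
    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
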
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
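
[editor's note] The cycle above is minikube's control-plane probe: for each expected component it runs "sudo crictl ps -a --quiet --name=<component>" and treats empty output as "no container found". The Go sketch below mirrors that enumeration; the function name listContainerIDs and the hard-coded component list are illustrative, not minikube's actual cri.go code, and it assumes crictl is on the PATH with passwordless sudo.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the control-plane names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainerIDs runs crictl with --quiet, so stdout is just container
// IDs, one per line; an empty result means the component has no container.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("W listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Matches the log's `No container was found matching "<name>"`.
			fmt.Printf("W no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("I found ids for %q: %v\n", c, ids)
	}
}
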
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
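
[editor's note] The "container status" step above uses a shell fallback chain: `which crictl || echo crictl` picks up crictl wherever it lives, and if the whole crictl invocation fails, "sudo docker ps -a" is tried instead. A standalone sketch of that try-then-fall-back shape, under the same assumptions (sudo available; containerStatus is an illustrative name, not a minikube helper):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first; if it is missing or exits non-zero,
// it falls back to docker, like the `... || sudo docker ps -a` pipeline.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Print(out)
}
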
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
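
[editor's note] The recurring "connection to the server localhost:8443 was refused" stderr in every "describe nodes" attempt means nothing is listening on the apiserver port, so each kubectl call must fail until a kube-apiserver container actually exists. A minimal reachability check that captures the same condition (an illustration only; apiServerUp is not what minikube runs):

package main

import (
	"fmt"
	"net"
	"time"
)

// apiServerUp dials the apiserver port; "connection refused" comes back
// as a dial error, exactly the state the log keeps reporting.
func apiServerUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver reachable:", apiServerUp("localhost:8443"))
}
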
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
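
[editor's note] The interleaved pod_ready.go lines from the other test processes (71227, 71396, 72069) are polling the metrics-server pod's Ready condition, which stays False for the whole window shown. A client-go sketch of that check, assuming a kubeconfig path and using the pod name from the log as a placeholder (isPodReady is an illustrative name, not minikube's helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True -- the
// condition behind the `has status "Ready":"False"` lines above.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is a placeholder for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := isPodReady(context.Background(), cs, "kube-system", "metrics-server-569cc877fc-dm7k7")
	fmt.Println(ready, err)
}
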
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
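
[editor's note] Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*": -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match. pgrep exits non-zero when nothing matches, which is how "apiserver process not running yet" is detected. A standalone sketch (apiServerPID is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerPID wraps the pgrep probe; a non-zero exit (err != nil) means
// no kube-apiserver process matched the pattern.
func apiServerPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if pid, ok := apiServerPID(); ok {
		fmt.Println("kube-apiserver pid:", pid)
	} else {
		fmt.Println("kube-apiserver is not running")
	}
}
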
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
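
[editor's note] Taken together, the timestamps show the same diagnostic cycle for process 71766 repeating every two to three seconds from 00:53:28 through 00:54:02 without the apiserver ever appearing. A plain poll-until-deadline loop reproduces that cadence; this is an illustration of the pattern visible in the log, not minikube's wait code, and waitForAPIServer is an invented name:

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// waitForAPIServer retries a TCP dial until the deadline, sleeping between
// attempts roughly like the 2-3s gaps between cycles in the log.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return errors.New("kube-apiserver never became reachable on " + addr)
}

func main() {
	fmt.Println(waitForAPIServer("localhost:8443", time.Minute))
}
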
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
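	(Every "describe nodes" attempt in these cycles fails with "connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port inside the VM. A quick hedged check for that exact symptom, assuming it runs on the node itself:)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the symptom in the log: connection refused because
			// no kube-apiserver ever came up to bind the port.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}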
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
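	(That completes one full diagnostic cycle: pgrep for a running apiserver process, then a crictl listing per control-plane component, each coming back empty. A sketch reproducing the probe sequence, assuming crictl is available on the node; this is an illustration, not minikube's actual logs.go/cri.go implementation:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The process check the log shows first each cycle; pgrep exits
		// non-zero when nothing matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
			fmt.Println("no running kube-apiserver process found")
		}
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same command ssh_runner executes on the node in the log above.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("W: No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("I: found %d container(s) for %q: %v\n", len(ids), name, ids)
		}
	}

	(In this run every probe returns an empty ID list, which is why the cycle then falls back to gathering raw node logs.)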
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
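	(After the probes come up empty, each cycle collects the same node-side evidence: the kubelet and CRI-O journals, a filtered dmesg, and container status with a crictl-or-docker fallback. A minimal sketch of that collection step, assuming root access on the node; the shell commands are copied verbatim from the log lines above:)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one collection command through bash, like ssh_runner does,
	// and prints whatever it produced.
	func run(cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("command %q failed: %v\n", cmd, err)
		}
		fmt.Printf("=== %s ===\n%s\n", cmd, out)
	}

	func main() {
		run("sudo journalctl -u kubelet -n 400")
		run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		run("sudo journalctl -u crio -n 400")
		// Falls back to docker if crictl is absent, exactly as in the log.
		run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}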
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
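
The block above is one full pass of minikube's control-plane diagnostic sweep: pgrep for a kube-apiserver process, then a `crictl ps -a --quiet --name=<component>` probe per component, and, when everything comes back empty, a fallback gather of kubelet, dmesg, `kubectl describe nodes`, and CRI-O logs. Each empty ID list is what drives the repeated `No container was found matching` warnings while the control plane is down. A minimal local sketch of the container probe, assuming crictl on PATH and passwordless sudo (minikube actually runs these over SSH via ssh_runner.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the `sudo crictl ps -a --quiet --name=<name>`
// probe in the log: it returns the IDs of all containers (any state)
// whose name matches, one ID per output line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			// Corresponds to the W-level "No container was found matching" lines.
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```
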
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
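
The grep/rm sequence above is minikube's stale-kubeconfig cleanup before `kubeadm init`: each conf under /etc/kubernetes is kept only if it already points at control-plane.minikube.internal:8443; here every file is missing after the reset (grep exits with status 2), so each is removed unconditionally. A hedged sketch of the same loop, run locally rather than through ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range confs {
		// grep exits non-zero when the pattern is absent or the file is
		// missing (status 2 in the log, since the files do not exist);
		// in that case the stale file is removed so `kubeadm init` can
		// write a fresh one.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
				fmt.Println("rm failed:", rmErr)
			}
		}
	}
}
```
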
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
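
Process 71396 here hits the same fallback 71766 did at 00:54:38: once the extra pod-readiness wait expires (4m0s for the metrics-server pod), restartPrimaryControlPlane gives up and the cluster is torn down with `kubeadm reset` ahead of a fresh `kubeadm init`. The expired wait has the shape of a poll-until-deadline loop; a rough sketch of that shape, with the actual client-go Ready-condition lookup left as a placeholder:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// podIsReady stands in for a real client-go check of the pod's
// status.conditions (type=Ready); it is a placeholder, not minikube code.
func podIsReady(ctx context.Context, ns, name string) (bool, error) {
	return false, nil // hypothetical: a metrics-server pod that never goes Ready
}

// waitPodReady polls every 2s until the pod reports Ready or the deadline
// passes, matching the shape of the 4m0s wait that expired above.
func waitPodReady(ns, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for pod " + ns + "/" + name + " to be Ready")
		case <-tick.C:
			if ready, err := podIsReady(ctx, ns, name); err == nil && ready {
				return nil
			}
		}
	}
}

func main() {
	fmt.Println(waitPodReady("kube-system", "metrics-server-78fcd8795b-k5q49", 4*time.Minute))
}
```
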
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
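
Unlike 71766's empty sweeps, process 72069 finds one container per control-plane component, so its gather phase tails each one with `crictl logs --tail 400 <id>` in addition to the journalctl and describe-nodes collectors. A minimal sketch of that per-container tail (the container ID below is hypothetical; the real loop feeds in IDs found by `crictl ps`):

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors `sudo crictl logs --tail 400 <id>` from the
// gather loop above; minikube pipes this command through ssh_runner over SSH.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("d8e399257c6a", 400) // hypothetical ID
	if err != nil {
		fmt.Println("crictl logs failed:", err)
		return
	}
	fmt.Println(logs)
}
```
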
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
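
With init complete, cni.go detects the "kvm2" driver plus "crio" runtime combination and selects the bridge CNI. For illustration only, a bridge conflist of the general shape such a step installs; the exact template, target path, and 10.244.0.0/16 subnet below are assumptions, not minikube's verbatim output:

```go
package main

import (
	"fmt"
	"os"
)

// An illustrative bridge CNI conflist (not minikube's exact template).
// Writing a file like this into /etc/cni/net.d, as root on the node, is
// what "Configuring bridge CNI" amounts to.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}
```
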
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
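
The block above is minikube's container-discovery loop: for each control-plane component it shells into the node and runs "sudo crictl ps -a --quiet --name=<component>", which prints one container ID per line (or nothing, as with kindnet below). A minimal local sketch of the same probe, assuming crictl is on PATH and sudo is available; the function name here is illustrative, not minikube's:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // findContainers mimics the `crictl ps -a --quiet --name=<name>` probe
    // from the log: it returns every container ID whose name matches.
    func findContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := findContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }
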
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
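
Once the IDs are known, the gathering pass pulls each container's recent output with "sudo /usr/bin/crictl logs --tail 400 <id>", plus journalctl for the kubelet and CRI-O units and "describe nodes" from the bundled kubectl; the 400-line caps keep the diagnostic bundle bounded. A hedged sketch of the per-container step (the container ID is the etcd one from the log above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLog mirrors the "crictl logs --tail 400 <id>" calls above.
    // CombinedOutput is used because container logs may arrive on either stream.
    func tailContainerLog(id string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo /usr/bin/crictl logs --tail 400 "+id).CombinedOutput()
        return string(out), err
    }

    func main() {
        log, err := tailContainerLog("a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24")
        if err != nil {
            fmt.Println("gather failed:", err)
            return
        }
        fmt.Print(log)
    }
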
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
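
The health gate that just passed is a plain HTTPS GET against the apiserver's /healthz endpoint; a 200 with body "ok" ends the wait, and the 3.9s duration metric spans the whole wait including the log gathering interleaved above. A simplified sketch of the probe; the real client authenticates with the cluster's client certificates, while this one only skips server-cert verification so it runs against the self-signed apiserver cert (an assumption for brevity, not minikube's code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs the same GET the log shows returning "200: ok".
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://192.168.72.32:8443/healthz"))
    }
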
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
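
The 496-byte scp above drops a CNI config into /etc/cni/net.d so CRI-O can wire pod networking before any network addon is installed. The exact bytes are not shown in the log; the conflist below is an illustrative bridge configuration of the same shape, a plausible stand-in rather than a copy of what minikube ships:

    package main

    import (
        "fmt"
        "os"
    )

    // An illustrative bridge-CNI conflist of the kind placed at
    // /etc/cni/net.d/1-k8s.conflist; subnet and plugin options are assumptions.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        }
      ]
    }`

    func main() {
        // Writing to /etc/cni/net.d requires root; use a scratch path when testing.
        fmt.Println(os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644))
    }
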
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
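
These two kubectl calls finish cluster bootstrap: a "minikube-rbac" clusterrolebinding grants cluster-admin to the kube-system default serviceaccount, and the node is stamped with minikube.k8s.io/* labels (version, commit, primary). The sketch below replays the same shape with a local kubectl instead of the node-bundled binary, so the node name is taken from the log and would need adjusting:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The two bootstrap calls from the log, trimmed to their essentials.
        cmds := [][]string{
            {"kubectl", "create", "clusterrolebinding", "minikube-rbac",
                "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default"},
            {"kubectl", "label", "--overwrite", "nodes", "no-preload-945581",
                "minikube.k8s.io/primary=true"},
        }
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            fmt.Printf("%v -> %s err=%v\n", c, out, err)
        }
    }
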
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
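
The oom_adj probe above ("cat /proc/$(pgrep kube-apiserver)/oom_adj") just reported here: the apiserver runs with the legacy OOM-killer adjustment -16 (the oom_adj scale runs -17..+15), so under memory pressure the kernel will sacrifice nearly any other process first. The same check, as a sketch that assumes exactly one kube-apiserver process is running:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj reads the legacy oom_adj value for kube-apiserver,
    // exactly as the log's shell command does; -16 means "nearly never kill".
    func apiserverOOMAdj() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        fmt.Println(apiserverOOMAdj())
    }
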
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
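
The half-second cadence of the "get sa default" calls above is a readiness gate: kubeadm creates the default serviceaccount asynchronously, and the RBAC grant can only bind it once it exists, so minikube polls until the get succeeds (about 4.4s here). A generic version of that loop, assuming a local kubectl and kubeconfig:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls "kubectl get sa default" every 500ms, the same
    // gate the log shows firing until elevateKubeSystemPrivileges completes.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("default serviceaccount never appeared")
    }

    func main() {
        fmt.Println(waitForDefaultSA(2 * time.Minute))
    }
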
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
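
The "extra waiting" pass enumerates pods by the selectors listed above and requires each to report a Ready condition of True; the per-pod waits that follow (coredns, etcd, apiserver, ...) are that loop unrolled. The predicate itself can be reproduced with a jsonpath query; the pod name below is the one from this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // podReady asks the apiserver for the pod's Ready condition, the same
    // test pod_ready.go applies to each system-critical pod.
    func podReady(ns, name string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        fmt.Println(podReady("kube-system", "coredns-5cfdc65f69-68wll"))
    }
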
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
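
Enabling an addon ends in a single apply: the manifests are scp'd to /etc/kubernetes/addons on the node, then the bundled kubectl applies all four metrics-server files in one invocation against the machine-local kubeconfig. A sketch of that invocation shape, with the log's paths left in and therefore only runnable on such a node:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // sudo accepts VAR=value assignments before the command, which is how
        // KUBECONFIG reaches the bundled kubectl in the log's invocation.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        fmt.Println(cmd.Run())
    }
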
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
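
kubeadm's kubelet-check is exactly the curl it names: after a 40s grace period it probes the kubelet's health endpoint on localhost:10248 and reports "connection refused" until the kubelet answers; the retries for this profile (71766) continue below at 00:55:30, 00:55:40 and 00:56:00. The probe reduces to:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // kubeletHealthy reproduces kubeadm's kubelet-check: a plain HTTP GET
    // against http://localhost:10248/healthz.
    func kubeletHealthy() bool {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            return false // e.g. "connect: connection refused", as in the log
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        fmt.Println(kubeletHealthy())
    }
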
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
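
Before trusting /healthz, minikube first confirms an apiserver process exists at all. The pgrep flags matter: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n picks the newest match, so "kube-apiserver.*minikube.*" finds the current minikube-launched apiserver and nothing else. As a sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // apiserverPID mirrors the log's process gate:
    // sudo pgrep -xnf kube-apiserver.*minikube.*
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        fmt.Println(apiserverPID())
    }
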
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
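
The sequence above is minikube's stale-kubeconfig sweep after the kubeadm reset: each /etc/kubernetes/*.conf file is grepped for this cluster's endpoint (https://control-plane.minikube.internal:8444) and removed when the grep fails; here the reset already deleted the files, so every grep exits with status 2 and the rm calls are no-ops. The loop reduces to:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cleanStaleConfigs keeps a kubeconfig only if it mentions the expected
    // control-plane endpoint, mirroring the grep-then-rm pairs in the log.
    func cleanStaleConfigs(endpoint string, files []string) {
        for _, f := range files {
            if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
                // grep exits non-zero when the endpoint (or the file) is missing.
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    fmt.Fprintln(os.Stderr, "rm failed:", err)
                }
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
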
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
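	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. Assuming the CA sits at /var/lib/minikube/certs/ca.crt (under the certificateDir reported earlier), the hash can be recomputed and checked with a short standard-library program:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes) // first PEM block is the CA cert
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

	The output should match the 80ccbc94... value printed with both join commands above.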
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
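	The 496-byte 1-k8s.conflist pushed above carries the bridge CNI configuration; its exact contents are not shown in the log. For illustration only, a generic bridge conflist in the same spirit (the plugin list, names, and subnet are assumptions, not minikube's actual file) could be written like this:

    package main

    import "os"

    // Illustrative bridge CNI config; minikube's real 1-k8s.conflist differs.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }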
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
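	The oom_adj check above reads /proc/<pid>/oom_adj for the freshly started kube-apiserver; the reported -16 (on the legacy -17..15 scale) makes the apiserver a very unattractive target for the OOM killer. The same probe, sketched without the bash subshell:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err) // no kube-apiserver process found
        }
        pids := strings.Fields(string(out))
        if len(pids) == 0 {
            panic("pgrep returned no PIDs")
        }
        adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("kube-apiserver oom_adj: %s", adj) // expect -16, as in the log
    }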
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
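	The burst of kubectl get sa default calls between 00:56:14 and 00:56:27 is a fixed-interval poll: after creating the minikube-rbac clusterrolebinding, minikube retries roughly every 500ms until the default service account exists, which indirectly confirms the controller-manager's service-account machinery is up. A simplified standalone version of that wait (assumes kubectl on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds
    // or the deadline passes, mirroring the ~500ms cadence in the log.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if cmd.Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("default service account exists")
    }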
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
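	Each sshutil "new ssh client" line above opens a fresh SSH connection into the VM using the per-machine id_rsa key and the docker user. A minimal equivalent with golang.org/x/crypto/ssh (host-key checking is disabled only because the target is a local, throwaway VM):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "192.168.61.97:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local VM only; never for real hosts
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Printf("kubelet: %s", out)
    }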
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
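	node_ready and pod_ready above gate on the Kubernetes API rather than on process state: the node object must publish the Ready condition and each system-critical pod must reach Ready. The node half can be reproduced with client-go roughly as follows (kubeconfig path and node name taken from the log; the polling constants are illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll until the node reports Ready=True, as node_ready.go does.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(),
                "default-k8s-diff-port-214905", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API error: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }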
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
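	The healthz wait is a plain HTTPS GET against the apiserver on port 8444; a 200 with body "ok" ends the loop, after which the control-plane version is read. A bare-bones probe of the same endpoint (certificate verification is skipped here purely for brevity; on default RBAC, /healthz is readable without client credentials via the system:public-info-viewer binding):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustration only: skip verifying the apiserver's cert chain.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.61.97:8444/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%s\n", resp.StatusCode, body) // expect 200 / ok
    }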
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
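	The retry.go lines above show a jittered, growing backoff (~238ms, ~321ms, ~459ms) around the "all k8s-apps running" check, re-listing kube-system pods until no component is missing. The shape of such a loop, sketched with exponential backoff plus jitter (constants and jitter formula are illustrative, not minikube's exact tuning):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs check until it reports no missing components,
    // sleeping an exponentially growing, jittered interval between attempts.
    func retryWithBackoff(check func() []string, attempts int, base time.Duration) error {
        delay := base
        for i := 0; i < attempts; i++ {
            missing := check()
            if len(missing) == 0 {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: missing components: %v\n", jittered, missing)
            time.Sleep(jittered)
            delay *= 2
        }
        return fmt.Errorf("components still missing after %d attempts", attempts)
    }

    func main() {
        calls := 0
        err := retryWithBackoff(func() []string {
            calls++
            if calls < 3 {
                return []string{"kube-dns", "kube-proxy"}
            }
            return nil
        }, 5, 200*time.Millisecond)
        fmt.Println("done, err =", err)
    }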
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
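Editor's note: the first kubeadm attempt above fails because the kubelet never answers its health endpoint. A minimal way to reproduce the same probe by hand on the node, using only the commands and endpoint already shown in this log (the expected "ok" response is an assumption about a healthy kubelet):

	# Is the kubelet service running, and why did it exit if not?
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	# The health check kubeadm retries; a healthy kubelet answers "ok"
	curl -sSL http://localhost:10248/healthz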
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
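Editor's note: the four grep/rm pairs above are minikube's stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the retried kubeadm init can regenerate it. A bash sketch of the same check (endpoint and file names taken from the log; the loop itself is illustrative, not minikube's code):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # grep exits non-zero when the endpoint (or the whole file) is missing
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done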
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
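Editor's note: after the second timeout, minikube scans the container runtime for each control-plane component to see whether anything started and then crashed; every query above comes back empty. The same scan can be run by hand with the crictl invocation from the log (the loop wrapper is illustrative):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	    echo "== $name =="
	    sudo crictl ps -a --quiet --name="$name"   # empty output means no container found
	done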
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
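Editor's note: the diagnostics gathered above can be reproduced on the node with the same commands the log records (only the comments are added; the describe-nodes call fails here because the apiserver never came up):

	sudo crictl ps -a                                   # container status
	sudo journalctl -u kubelet -n 400                   # kubelet log tail
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400                      # CRI-O log tail
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig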
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 
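Editor's note: acting on the suggestion printed above would look roughly like the following; the --extra-config flag and the logs command are copied from the output, while the profile placeholder and the remaining flags are assumptions for illustration:

	minikube logs --file=logs.txt        # capture logs to attach to a GitHub issue
	minikube start -p <profile> \
	    --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd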
	
	
	==> CRI-O <==
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.304262956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610811304238934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7e8098e-d8b6-47e5-aa12-e42819b33ac5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.304755861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0d384c3-394b-4b11-aa1d-01508da1e68f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.304804214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0d384c3-394b-4b11-aa1d-01508da1e68f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.305117154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609490110721642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f229c6081d935a975ee7e239526c7d0a9f44f043cdc7a6266155565912b363cb,PodSandboxId:7b1d393663db911bc0907f85b5c7c79659de3ba431679871a54948fac7379d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721609469280681964,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c23b021a-f68e-40c7-ac17-1ec62007d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 86213cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc,PodSandboxId:eda7d19c94d09f892d095f975472869b33a767597962d6e9bc4b4de5d137abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609466935579709,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7mzsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d43245-3f6c-4d8b-bffa-bc8298b65025,},Annotations:map[string]string{io.kubernetes.container.hash: a0707b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a,PodSandboxId:842461323b73ba75e0e7d441f60ee0c82ab302b3a615dbc5869d7332037a4404,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609459372224064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167c03f0-5b03-433a-951c-229baa23eb02,},Annotations:map[string]string{io.kubernetes.container.hash: 7aff9734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721609459296292870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24,PodSandboxId:66e3a11ef4d843a168d3750da15a4ef3354149ea9f08fa855d63fbd152b3c225,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609455675586955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc781974ce92ff92256d8d2d6d76d077,},Annotations:map[string]string{io.kubernetes.container.hash: 30fd19d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e,PodSandboxId:35d2b53feb9b2411e6fea4cae26ca9704b9ee3278751b0d59a7ccd9363481dff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609455640479511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c50e8fd585c2c29aa684ef590528913,},Annotations:map[string]string{io.kubernetes.container.hash: 60414973,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a,PodSandboxId:a3b49133ad1b8b60fca893c4673f2e5a0cf56b6e67287b84b814c2f4ea3bbe61,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609455643228307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e7df2c2d19498268e0ef65b20005b2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e,PodSandboxId:1ebf78c891885178423c21dfe5dffc296ae7b95ed94f3ec7d93be573f695a08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609455615904992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89427a1c4949093b02da2b95b772c63e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0d384c3-394b-4b11-aa1d-01508da1e68f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.346650399Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eede8e48-fa4d-46d1-a79e-0f940f9072ad name=/runtime.v1.RuntimeService/Version
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.346751693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eede8e48-fa4d-46d1-a79e-0f940f9072ad name=/runtime.v1.RuntimeService/Version
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.347971800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55468b80-648d-4add-a08b-1665167a9764 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.348472434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610811348446891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55468b80-648d-4add-a08b-1665167a9764 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.349083987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3bc9fd0-f3c7-461e-accf-01976caec651 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.349147725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3bc9fd0-f3c7-461e-accf-01976caec651 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.349389478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609490110721642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f229c6081d935a975ee7e239526c7d0a9f44f043cdc7a6266155565912b363cb,PodSandboxId:7b1d393663db911bc0907f85b5c7c79659de3ba431679871a54948fac7379d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721609469280681964,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c23b021a-f68e-40c7-ac17-1ec62007d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 86213cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc,PodSandboxId:eda7d19c94d09f892d095f975472869b33a767597962d6e9bc4b4de5d137abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609466935579709,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7mzsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d43245-3f6c-4d8b-bffa-bc8298b65025,},Annotations:map[string]string{io.kubernetes.container.hash: a0707b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a,PodSandboxId:842461323b73ba75e0e7d441f60ee0c82ab302b3a615dbc5869d7332037a4404,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609459372224064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167c03f0-5b03-433a-951c-229baa23eb02,},Annotations:map[string]string{io.kubernetes.container.hash: 7aff9734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721609459296292870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24,PodSandboxId:66e3a11ef4d843a168d3750da15a4ef3354149ea9f08fa855d63fbd152b3c225,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609455675586955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc781974ce92ff92256d8d2d6d76d077,},Annotations:map[string]string{io.kubernetes.container.hash: 30fd19d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e,PodSandboxId:35d2b53feb9b2411e6fea4cae26ca9704b9ee3278751b0d59a7ccd9363481dff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609455640479511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c50e8fd585c2c29aa684ef590528913,},Annotations:map[string]string{io.kubernetes.container.hash: 60414973,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a,PodSandboxId:a3b49133ad1b8b60fca893c4673f2e5a0cf56b6e67287b84b814c2f4ea3bbe61,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609455643228307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e7df2c2d19498268e0ef65b20005b2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e,PodSandboxId:1ebf78c891885178423c21dfe5dffc296ae7b95ed94f3ec7d93be573f695a08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609455615904992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89427a1c4949093b02da2b95b772c63e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3bc9fd0-f3c7-461e-accf-01976caec651 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.383504053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62353b82-fafa-4f3a-96b5-032b27500630 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.383578861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62353b82-fafa-4f3a-96b5-032b27500630 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.385261069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=144b38b9-1663-4c0f-8873-e12690c2bbc6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.388127721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610811388094684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=144b38b9-1663-4c0f-8873-e12690c2bbc6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.390230722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bf644d0-281c-485f-9f17-f89fb0b54799 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.390314636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bf644d0-281c-485f-9f17-f89fb0b54799 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.391066891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609490110721642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f229c6081d935a975ee7e239526c7d0a9f44f043cdc7a6266155565912b363cb,PodSandboxId:7b1d393663db911bc0907f85b5c7c79659de3ba431679871a54948fac7379d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721609469280681964,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c23b021a-f68e-40c7-ac17-1ec62007d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 86213cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc,PodSandboxId:eda7d19c94d09f892d095f975472869b33a767597962d6e9bc4b4de5d137abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609466935579709,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7mzsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d43245-3f6c-4d8b-bffa-bc8298b65025,},Annotations:map[string]string{io.kubernetes.container.hash: a0707b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a,PodSandboxId:842461323b73ba75e0e7d441f60ee0c82ab302b3a615dbc5869d7332037a4404,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609459372224064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167c03f0-5b03-433a-9
51c-229baa23eb02,},Annotations:map[string]string{io.kubernetes.container.hash: 7aff9734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721609459296292870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879aff
e57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24,PodSandboxId:66e3a11ef4d843a168d3750da15a4ef3354149ea9f08fa855d63fbd152b3c225,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609455675586955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc781974ce92ff92256d8d2d6d76d077,},Annotations:map[string]string{io.kub
ernetes.container.hash: 30fd19d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e,PodSandboxId:35d2b53feb9b2411e6fea4cae26ca9704b9ee3278751b0d59a7ccd9363481dff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609455640479511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c50e8fd585c2c29aa684ef590528913,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 60414973,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a,PodSandboxId:a3b49133ad1b8b60fca893c4673f2e5a0cf56b6e67287b84b814c2f4ea3bbe61,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609455643228307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e7df2c2d19498268e0ef65b20005b2,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e,PodSandboxId:1ebf78c891885178423c21dfe5dffc296ae7b95ed94f3ec7d93be573f695a08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609455615904992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89427a1c4949093b02da2b95b772c63e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bf644d0-281c-485f-9f17-f89fb0b54799 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.422947322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=860cefc7-56bb-4d67-86cf-81020a6c6b9e name=/runtime.v1.RuntimeService/Version
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.423037841Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=860cefc7-56bb-4d67-86cf-81020a6c6b9e name=/runtime.v1.RuntimeService/Version
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.424075490Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=378ebf63-5c81-406e-9488-d220aeae037f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.424662361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610811424636862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=378ebf63-5c81-406e-9488-d220aeae037f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.425193241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7880e61-4371-4d52-82a6-2b9ecc16c060 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.425264294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7880e61-4371-4d52-82a6-2b9ecc16c060 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:13:31 embed-certs-360389 crio[721]: time="2024-07-22 01:13:31.425630988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609490110721642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879affe57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f229c6081d935a975ee7e239526c7d0a9f44f043cdc7a6266155565912b363cb,PodSandboxId:7b1d393663db911bc0907f85b5c7c79659de3ba431679871a54948fac7379d3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721609469280681964,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c23b021a-f68e-40c7-ac17-1ec62007d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 86213cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc,PodSandboxId:eda7d19c94d09f892d095f975472869b33a767597962d6e9bc4b4de5d137abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609466935579709,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7mzsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d43245-3f6c-4d8b-bffa-bc8298b65025,},Annotations:map[string]string{io.kubernetes.container.hash: a0707b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a,PodSandboxId:842461323b73ba75e0e7d441f60ee0c82ab302b3a615dbc5869d7332037a4404,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609459372224064,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8j7bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167c03f0-5b03-433a-9
51c-229baa23eb02,},Annotations:map[string]string{io.kubernetes.container.hash: 7aff9734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397,PodSandboxId:7eb17818463762e47bc926c7bfbb9f3ab3e337cc037faf1980bfc0e3f77e1fe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721609459296292870,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c76b619-6b7f-45b0-93c2-df9879aff
e57,},Annotations:map[string]string{io.kubernetes.container.hash: 4534287c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24,PodSandboxId:66e3a11ef4d843a168d3750da15a4ef3354149ea9f08fa855d63fbd152b3c225,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609455675586955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc781974ce92ff92256d8d2d6d76d077,},Annotations:map[string]string{io.kub
ernetes.container.hash: 30fd19d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e,PodSandboxId:35d2b53feb9b2411e6fea4cae26ca9704b9ee3278751b0d59a7ccd9363481dff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609455640479511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c50e8fd585c2c29aa684ef590528913,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 60414973,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a,PodSandboxId:a3b49133ad1b8b60fca893c4673f2e5a0cf56b6e67287b84b814c2f4ea3bbe61,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609455643228307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e7df2c2d19498268e0ef65b20005b2,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e,PodSandboxId:1ebf78c891885178423c21dfe5dffc296ae7b95ed94f3ec7d93be573f695a08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609455615904992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-360389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89427a1c4949093b02da2b95b772c63e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7880e61-4371-4d52-82a6-2b9ecc16c060 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d8e399257c6a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   7eb1781846376       storage-provisioner
	f229c6081d935       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   7b1d393663db9       busybox
	93b990e487bfd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   eda7d19c94d09       coredns-7db6d8ff4d-7mzsv
	fc4ac4f1206a6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      22 minutes ago      Running             kube-proxy                1                   842461323b73b       kube-proxy-8j7bx
	8efc9587f83d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   7eb1781846376       storage-provisioner
	a6a52deb00960       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   66e3a11ef4d84       etcd-embed-certs-360389
	193fb390e4d47       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      22 minutes ago      Running             kube-controller-manager   1                   a3b49133ad1b8       kube-controller-manager-embed-certs-360389
	62e46b9a1718a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      22 minutes ago      Running             kube-apiserver            1                   35d2b53feb9b2       kube-apiserver-embed-certs-360389
	deb1a27ba8547       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      22 minutes ago      Running             kube-scheduler            1                   1ebf78c891885       kube-scheduler-embed-certs-360389
	
	
	==> coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58239 - 55400 "HINFO IN 7183721124252281798.7244563882075223873. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013611028s
	
	
	==> describe nodes <==
	Name:               embed-certs-360389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-360389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=embed-certs-360389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_44_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:44:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-360389
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 01:13:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 01:11:54 +0000   Mon, 22 Jul 2024 00:44:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 01:11:54 +0000   Mon, 22 Jul 2024 00:44:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 01:11:54 +0000   Mon, 22 Jul 2024 00:44:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 01:11:54 +0000   Mon, 22 Jul 2024 00:51:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.32
	  Hostname:    embed-certs-360389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec29f684ac484cc89954b52d4bb590db
	  System UUID:                ec29f684-ac48-4cc8-9954-b52d4bb590db
	  Boot ID:                    2fdd82bf-1aa7-46c3-ac7a-f2195fb3f2aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7db6d8ff4d-7mzsv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-360389                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-360389             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-360389    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-8j7bx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-360389             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-k68zp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-360389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-360389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-360389 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-360389 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-360389 event: Registered Node embed-certs-360389 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-360389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-360389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-360389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-360389 event: Registered Node embed-certs-360389 in Controller
	
	
	==> dmesg <==
	[Jul22 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060293] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038357] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858457] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.796834] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.519045] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.288135] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.063121] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066416] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.225545] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.129536] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.289587] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.388152] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.075987] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.928566] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +4.642618] kauditd_printk_skb: 97 callbacks suppressed
	[Jul22 00:51] systemd-fstab-generator[1531]: Ignoring "noauto" option for root device
	[  +3.208787] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.917732] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] <==
	{"level":"info","ts":"2024-07-22T00:50:57.367622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 received MsgPreVoteResp from af722703d3b6d364 at term 2"}
	{"level":"info","ts":"2024-07-22T00:50:57.367634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T00:50:57.367639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 received MsgVoteResp from af722703d3b6d364 at term 3"}
	{"level":"info","ts":"2024-07-22T00:50:57.367647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T00:50:57.367659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: af722703d3b6d364 elected leader af722703d3b6d364 at term 3"}
	{"level":"info","ts":"2024-07-22T00:50:57.369939Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"af722703d3b6d364","local-member-attributes":"{Name:embed-certs-360389 ClientURLs:[https://192.168.72.32:2379]}","request-path":"/0/members/af722703d3b6d364/attributes","cluster-id":"69693fe7a610a475","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:50:57.370071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:50:57.370123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:50:57.371451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:50:57.371476Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:50:57.371919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.32:2379"}
	{"level":"info","ts":"2024-07-22T00:50:57.374326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-22T00:51:15.636064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.165041ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15232459096717954203 > lease_revoke:<id:536490d7eb8a8b71>","response":"size:28"}
	{"level":"info","ts":"2024-07-22T00:51:15.636336Z","caller":"traceutil/trace.go:171","msg":"trace[551364333] linearizableReadLoop","detail":"{readStateIndex:595; appliedIndex:594; }","duration":"265.17263ms","start":"2024-07-22T00:51:15.371135Z","end":"2024-07-22T00:51:15.636308Z","steps":["trace[551364333] 'read index received'  (duration: 9.328515ms)","trace[551364333] 'applied index is now lower than readState.Index'  (duration: 255.842329ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:51:15.636653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.480227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-k68zp\" ","response":"range_response_count:1 size:4281"}
	{"level":"info","ts":"2024-07-22T00:51:15.63672Z","caller":"traceutil/trace.go:171","msg":"trace[2032112048] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-k68zp; range_end:; response_count:1; response_revision:562; }","duration":"265.604616ms","start":"2024-07-22T00:51:15.371104Z","end":"2024-07-22T00:51:15.636709Z","steps":["trace[2032112048] 'agreement among raft nodes before linearized reading'  (duration: 265.37298ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T01:00:57.400834Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":806}
	{"level":"info","ts":"2024-07-22T01:00:57.411776Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":806,"took":"10.094727ms","hash":2900274465,"current-db-size-bytes":2756608,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2756608,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-07-22T01:00:57.411929Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2900274465,"revision":806,"compact-revision":-1}
	{"level":"info","ts":"2024-07-22T01:05:57.412848Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1048}
	{"level":"info","ts":"2024-07-22T01:05:57.416798Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1048,"took":"3.476078ms","hash":3248716011,"current-db-size-bytes":2756608,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1691648,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-22T01:05:57.416869Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3248716011,"revision":1048,"compact-revision":806}
	{"level":"info","ts":"2024-07-22T01:10:57.419665Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1292}
	{"level":"info","ts":"2024-07-22T01:10:57.423779Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1292,"took":"3.793122ms","hash":2974687485,"current-db-size-bytes":2756608,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-22T01:10:57.423822Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2974687485,"revision":1292,"compact-revision":1048}
	
	
	==> kernel <==
	 01:13:31 up 23 min,  0 users,  load average: 0.26, 0.69, 0.90
	Linux embed-certs-360389 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] <==
	I0722 01:06:59.714326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:08:59.714064       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:08:59.714170       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:08:59.714182       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:08:59.715426       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:08:59.715552       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:08:59.715574       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:10:58.717239       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:10:58.717643       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0722 01:10:59.717905       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:10:59.718071       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:10:59.718102       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:10:59.718231       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:10:59.718279       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:10:59.719435       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:11:59.718425       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:11:59.718657       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:11:59.718688       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:11:59.719887       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:11:59.719947       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:11:59.719974       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] <==
	I0722 01:07:42.302843       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:08:11.801439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:08:12.309911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:08:41.805600       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:08:42.318178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:09:11.810576       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:09:12.326211       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:09:41.815550       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:09:42.334294       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:10:11.821043       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:10:12.342600       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:10:41.825519       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:10:42.351060       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:11:11.831514       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:11:12.359521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:11:41.835950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:11:42.368274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:12:11.841142       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:12:12.376668       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 01:12:21.897905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="232.349µs"
	I0722 01:12:36.899580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="60.58µs"
	E0722 01:12:41.846137       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:12:42.383223       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:13:11.851099       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:13:12.392207       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] <==
	I0722 00:50:59.546436       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:50:59.559114       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.32"]
	I0722 00:50:59.624579       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:50:59.624701       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:50:59.624741       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:50:59.631681       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:50:59.632069       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:50:59.632149       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:50:59.634021       1 config.go:192] "Starting service config controller"
	I0722 00:50:59.634111       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:50:59.634210       1 config.go:319] "Starting node config controller"
	I0722 00:50:59.634263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:50:59.634499       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:50:59.634528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:50:59.735692       1 shared_informer.go:320] Caches are synced for node config
	I0722 00:50:59.735818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:50:59.735846       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] <==
	I0722 00:50:56.376077       1 serving.go:380] Generated self-signed cert in-memory
	W0722 00:50:58.674942       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 00:50:58.675099       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:50:58.675184       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 00:50:58.675209       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 00:50:58.710051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 00:50:58.710098       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:50:58.714206       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 00:50:58.715585       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 00:50:58.715623       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 00:50:58.715644       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 00:50:58.816499       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 01:11:00 embed-certs-360389 kubelet[930]: E0722 01:11:00.882040     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:11:15 embed-certs-360389 kubelet[930]: E0722 01:11:15.882543     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:11:27 embed-certs-360389 kubelet[930]: E0722 01:11:27.883588     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:11:42 embed-certs-360389 kubelet[930]: E0722 01:11:42.883723     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:11:54 embed-certs-360389 kubelet[930]: E0722 01:11:54.897099     930 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:11:54 embed-certs-360389 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:11:54 embed-certs-360389 kubelet[930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:11:54 embed-certs-360389 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:11:54 embed-certs-360389 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:11:55 embed-certs-360389 kubelet[930]: E0722 01:11:55.882165     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:12:10 embed-certs-360389 kubelet[930]: E0722 01:12:10.895264     930 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 22 01:12:10 embed-certs-360389 kubelet[930]: E0722 01:12:10.895667     930 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 22 01:12:10 embed-certs-360389 kubelet[930]: E0722 01:12:10.896198     930 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz4cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-k68zp_kube-system(9d851e83-b647-4e9e-a098-45c8b9d10323): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 22 01:12:10 embed-certs-360389 kubelet[930]: E0722 01:12:10.896337     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:12:21 embed-certs-360389 kubelet[930]: E0722 01:12:21.883180     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:12:36 embed-certs-360389 kubelet[930]: E0722 01:12:36.882136     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:12:48 embed-certs-360389 kubelet[930]: E0722 01:12:48.882096     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:12:54 embed-certs-360389 kubelet[930]: E0722 01:12:54.896955     930 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:12:54 embed-certs-360389 kubelet[930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:12:54 embed-certs-360389 kubelet[930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:12:54 embed-certs-360389 kubelet[930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:12:54 embed-certs-360389 kubelet[930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:13:03 embed-certs-360389 kubelet[930]: E0722 01:13:03.881751     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:13:15 embed-certs-360389 kubelet[930]: E0722 01:13:15.881688     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
	Jul 22 01:13:29 embed-certs-360389 kubelet[930]: E0722 01:13:29.881843     930 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-k68zp" podUID="9d851e83-b647-4e9e-a098-45c8b9d10323"
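Two failures repeat through the kubelet log above: the iptables canary cannot create its chain because the guest kernel has no ip6tables nat table, and the metrics-server pull against the deliberately unreachable fake.domain registry keeps backing off. A minimal diagnostic sketch (the pod and module names come from the log; the commands themselves are standard tooling, not output of this run):

	# is the ip6table_nat module loaded? (the canary error hints it is not)
	lsmod | grep ip6table_nat || sudo modprobe ip6table_nat
	ip6tables -t nat -L -n
	# inspect the pod stuck in ImagePullBackOff
	kubectl -n kube-system describe pod metrics-server-569cc877fc-k68zp
	kubectl -n kube-system get events --field-selector involvedObject.name=metrics-server-569cc877fc-k68zp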
	
	
	==> storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] <==
	I0722 00:50:59.438639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0722 00:51:29.443418       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] <==
	I0722 00:51:30.210014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:51:30.225628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:51:30.225806       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:51:47.626322       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:51:47.627566       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-360389_47516c09-f34b-4973-966b-b31bc0bbc2c4!
	I0722 00:51:47.630660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aad4fa1f-009e-4076-a42a-18ba9d82c0b7", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-360389_47516c09-f34b-4973-966b-b31bc0bbc2c4 became leader
	I0722 00:51:47.728853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-360389_47516c09-f34b-4973-966b-b31bc0bbc2c4!
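The leader election above uses an Endpoints-based lock (the Event at 00:51:47.630660 references an Endpoints object in kube-system). As a sketch, the current holder can be read from that object's control-plane.alpha.kubernetes.io/leader annotation:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml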
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-360389 -n embed-certs-360389
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-360389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-k68zp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-360389 describe pod metrics-server-569cc877fc-k68zp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-360389 describe pod metrics-server-569cc877fc-k68zp: exit status 1 (59.324182ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-k68zp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-360389 describe pod metrics-server-569cc877fc-k68zp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (348.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-945581 -n no-preload-945581
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-22 01:10:23.648703177 +0000 UTC m=+6342.357301894
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-945581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-945581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.55µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-945581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
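The check at start_stop_delete_test.go:297 expects the scraper deployment's image to contain registry.k8s.io/echoserver:1.4. When describe times out as it did here, the image can also be read directly (a sketch reusing the context and deployment names from the log):

	kubectl --context no-preload-945581 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'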
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-945581 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-945581 logs -n 25: (2.14562541s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-590595                  | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 01:10 UTC | 22 Jul 24 01:10 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:19.710843   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:25.790913   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:28.862882   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:34.942917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:38.014863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:44.094898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:47.166853   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:53.246799   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:56.318939   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:02.398890   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:05.470909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:11.550863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:14.622851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:20.702859   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:23.774851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:29.854925   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:32.926912   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:39.006904   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:42.078947   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:48.158822   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:51.230942   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:57.310909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:00.382907   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:06.462849   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:09.534836   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:15.614953   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:18.686869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:24.766917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:27.838869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:33.918902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:36.990920   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:43.070898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
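The run of dial errors above shows that the default-k8s-diff-port VM never exposed SSH on 192.168.61.97. Outside the test harness, an equivalent reachability check would be roughly the following (the libvirt domain name is inferred from the profile name; virsh and nc are standard tools, not part of this log):

	virsh list --all                               # is the domain running at all?
	virsh domifaddr default-k8s-diff-port-214905   # did it obtain an IP?
	nc -vz -w 5 192.168.61.97 22                   # is port 22 reachable?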
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
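The waiting-for-machine loop above polls libvirt for a DHCP lease on the mk-no-preload-945581 network; the same lease table can be queried manually (network name taken from the log):

	virsh net-dhcp-leases mk-no-preload-945581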
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
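
WaitForSSH probes the guest by running `exit 0` through the system ssh binary with the hardened flags shown above. A sketch assembling the same invocation with os/exec; waitForSSH, the retry count, and the sleep interval are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH runs "exit 0" via /usr/bin/ssh with the options from the
    // log, retrying until the guest answers.
    func waitForSSH(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        for i := 0; i < 10; i++ { // retry budget is an assumption
            if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s never became available", ip)
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa"
        fmt.Println(waitForSSH("192.168.50.251", key))
    }
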
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
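
The SSH command above patches /etc/hosts so 127.0.1.1 maps to the machine name, replacing an existing 127.0.1.1 line or appending one. A small Go sketch that renders the same snippet for any hostname; hostsSnippet is a hypothetical helper:

    package main

    import "fmt"

    // hostsSnippet renders the /etc/hosts patch the provisioner runs over
    // SSH: update the 127.0.1.1 entry, or append it if the name is absent.
    func hostsSnippet(name string) string {
        return fmt.Sprintf(`
        if ! grep -xq '.*\s%[1]s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
            else
                echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
            fi
        fi`, name)
    }

    func main() { fmt.Println(hostsSnippet("no-preload-945581")) }
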
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
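
configureAuth generates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube, and the machine name, as the san=[...] list above shows. A compact crypto/x509 sketch; it self-signs for brevity (minikube signs with its CA) and elides error handling:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-945581"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            // SANs matching the san=[...] list in the log:
            DNSNames:    []string{"localhost", "minikube", "no-preload-945581"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.251")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here; the real flow signs with ca.pem/ca-key.pem.
        der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
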
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
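
fix.go reads the guest clock with `date +%s.%N`, subtracts it from host time, and accepts the drift when it is within tolerance. A sketch of that comparison; clockDelta and the 1s tolerance are assumptions, and the sample values reproduce the 93.178108ms delta from the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's "date +%s.%N" output and reports the
    // absolute drift from host time.
    func clockDelta(guestOut string, host time.Time) (time.Duration, bool) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        delta := host.Sub(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta
        }
        return delta, delta < time.Second // tolerance is an assumption
    }

    func main() {
        d, ok := clockDelta("1721609407.082052746", time.Unix(1721609406, 988874638))
        fmt.Printf("delta=%s within tolerance=%v\n", d, ok)
    }
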
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
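
The find/mv pipeline above sidelines conflicting bridge and podman CNI configs by renaming them with a .mk_disabled suffix. A Go rendering of the same idea; disableBridgeCNI is a hypothetical helper:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs to *.mk_disabled,
    // mirroring the find/mv pipeline in the log.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        out, err := disableBridgeCNI("/etc/cni/net.d")
        fmt.Println(out, err)
    }
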
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
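
Before switching to CRI-O, the flow stops, disables, and masks the cri-docker and docker units, tolerating failures when a unit does not exist. A sketch of that sequence; disableUnit is a made-up helper wrapping the systemctl calls shown above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // disableUnit stops, disables, and masks a systemd socket/service pair
    // the way the log does for cri-docker and docker.
    func disableUnit(name string) {
        for _, args := range [][]string{
            {"systemctl", "stop", "-f", name + ".socket"},
            {"systemctl", "stop", "-f", name + ".service"},
            {"systemctl", "disable", name + ".socket"},
            {"systemctl", "mask", name + ".service"},
        } {
            if err := exec.Command("sudo", args...).Run(); err != nil {
                fmt.Printf("%v: %v (unit may be absent, which is fine)\n", args, err)
            }
        }
    }

    func main() {
        disableUnit("cri-docker")
        disableUnit("docker")
    }
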
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
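
The sed invocations above rewrite key = value lines in the CRI-O drop-in config: the pause image, the cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. A Go sketch of one such in-place rewrite; setCrioOption is illustrative, while minikube really does run sed over SSH as logged:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setCrioOption rewrites a `key = value` line in a CRI-O drop-in
    // config, like the sed commands for pause_image and cgroup_manager.
    func setCrioOption(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        err := setCrioOption("/etc/crio/crio.conf.d/02-crio.conf",
            "pause_image", "registry.k8s.io/pause:3.10")
        fmt.Println(err)
    }
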
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
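
Note the fallback above: when the bridge-netfilter sysctl file is missing, the code loads br_netfilter and then enables IPv4 forwarding directly through /proc. A sketch of that sequence; ensureNetfilter is a hypothetical helper and must run as root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureNetfilter reproduces the fallback in the log: if the bridge
    // netfilter sysctl is absent, load br_netfilter, then turn on
    // ip_forward by writing to /proc.
    func ensureNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() { fmt.Println(ensureNetfilter()) }
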
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
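
After restarting crio, the flow waits up to 60s for the socket path and then for a working crictl, as the "Will wait 60s" lines above state. A polling sketch under those assumptions; waitForCrio and the 500ms poll interval are mine:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForCrio polls for the CRI-O socket, then for a successful
    // "crictl version", within the given budget.
    func waitForCrio(sock string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(sock); err == nil {
                if exec.Command("crictl", "version").Run() == nil {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("crio not ready after %s", timeout)
    }

    func main() {
        fmt.Println(waitForCrio("/var/run/crio/crio.sock", 60*time.Second))
    }
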
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
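
With no preload tarball for v1.31.0-beta.0, LoadCachedImages has to determine which required images already exist in the runtime; the `sudo crictl images --output json` call above is that check. A sketch of the comparison; missingImages is a hypothetical helper, and the JSON shape matches crictl's documented output:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // missingImages runs "crictl images --output json" and returns which
    // required refs are absent, the check behind the "needs transfer" lines.
    func missingImages(required []string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return nil, err
        }
        var resp struct {
            Images []struct {
                RepoTags []string `json:"repoTags"`
            } `json:"images"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            return nil, err
        }
        have := map[string]bool{}
        for _, img := range resp.Images {
            for _, t := range img.RepoTags {
                have[t] = true
            }
        }
        var missing []string
        for _, r := range required {
            if !have[r] {
                missing = append(missing, r)
            }
        }
        return missing, nil
    }

    func main() {
        m, err := missingImages([]string{"registry.k8s.io/pause:3.10"})
        fmt.Println(m, err)
    }
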
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
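
Each cached image is transferred at most once: a stat of /var/lib/minikube/images/<name> lets the loader skip the copy when the tarball is already on the guest, and `podman load -i` then imports it, which is where the "copy: skipping ... (exists)" lines above come from. A sketch of that skip-then-load step; loadCachedImage is illustrative and the scp transfer is elided:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadCachedImage imports an image tarball with "podman load -i",
    // skipping the (elided) scp when the file already exists on disk.
    func loadCachedImage(dst string) error {
        if _, err := os.Stat(dst); err != nil {
            return fmt.Errorf("tarball missing, would scp it first: %w", err)
        }
        fmt.Printf("copy: skipping %s (exists)\n", dst)
        return exec.Command("sudo", "podman", "load", "-i", dst).Run()
    }

    func main() {
        fmt.Println(loadCachedImage("/var/lib/minikube/images/etcd_3.5.14-0"))
    }
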
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
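
The image-load sequence above boils down to: stat the tarball on the guest, skip the transfer when it already exists with matching size/mtime, otherwise copy it over and `podman load` it into the CRI-O image store. A condensed sketch of that decision (guest address, paths, and image name are illustrative, not pulled from minikube's code):

    #!/usr/bin/env bash
    guest=docker@192.168.50.251
    img=/var/lib/minikube/images/etcd_3.5.14-0
    cached=$HOME/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
    # "%s %y" prints size and mtime -- the fields the log compares before copying.
    remote=$(ssh "$guest" stat -c '%s %y' "$img" 2>/dev/null)
    if [ "$remote" = "$(stat -c '%s %y' "$cached")" ]; then
      echo "copy: skipping $img (exists)"
    else
      scp -p "$cached" "$guest:$img"            # -p preserves mtime so the check holds next time
      ssh "$guest" sudo podman load -i "$img"   # load the tarball into the image store
    fi
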
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
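
The drop-in above sets `ExecStart=` twice on purpose: in a systemd drop-in, a bare `ExecStart=` clears the command inherited from the base unit before the new command line is set. A hand-written equivalent (sketch; the kubelet flag list is abbreviated):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
    EOF
    # Pick up the drop-in, as the log does a few lines further down:
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
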
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
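
The three documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration plus KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to sanity-check a rendered config of this shape before it is applied is kubeadm's dry-run mode (a sketch, run on the node where the binaries live):

    # Validates the config and prints the generated manifests without
    # touching the node's state.
    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
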
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
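
The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup: clients resolve a CA file in /etc/ssl/certs through a symlink named `<subject-hash>.0`. Reproduced by hand (sketch):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")     # e.g. b5213941, as in the log
    sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"      # hash-named symlink OpenSSL searches
    openssl verify -CApath /etc/ssl/certs "$pem"  # self-signed CA should report: OK
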
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
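
Each `-checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 h) from now; the answer is carried entirely in the exit code. A minimal expiry gate (path illustrative):

    crt=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$crt" -checkend 86400; then
      echo "still valid for at least 24h"
    else
      echo "expires within 24h -- would trigger regeneration"
    fi
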
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
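
"stopping kube-system containers" above enumerates containers by the io.kubernetes.pod.namespace label (here the list came back empty), then stops the kubelet so nothing respawns them. The same sequence, condensed (sketch):

    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    if [ -n "$ids" ]; then
      sudo crictl stop $ids   # unquoted $ids: word-splitting over the ID list is intentional
      sudo crictl rm $ids
    fi
    sudo systemctl stop kubelet
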
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
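
The four grep/rm pairs above are one loop unrolled: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is deleted so the restart path can regenerate it. As a single loop (sketch):

    want=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      p=/etc/kubernetes/$f.conf
      # grep exits 2 when the file is missing, 1 on no match -- remove either way.
      sudo grep -q "$want" "$p" 2>/dev/null || sudo rm -f "$p"
    done
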
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
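
"Restarting existing kvm2 VM" above amounts to ensuring both libvirt networks are active and then starting the stored domain. The virsh equivalent of those calls (sketch; network and domain names taken from the log):

    virsh --connect qemu:///system net-start default               2>/dev/null || true
    virsh --connect qemu:///system net-start mk-embed-certs-360389 2>/dev/null || true
    virsh --connect qemu:///system start embed-certs-360389
    virsh --connect qemu:///system domstate embed-certs-360389     # expect: running
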
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
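
WaitForSSH simply retries a no-op command (`exit 0`) over a non-interactive SSH client until it succeeds, with the option list shown above. The same probe as a loop (sketch; the driver uses a growing backoff where a fixed delay is used here):

    key=~/.minikube/machines/old-k8s-version-366657/id_rsa
    until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
              -o UserKnownHostsFile=/dev/null -i "$key" \
              docker@192.168.39.174 'exit 0' 2>/dev/null; do
      sleep 2
    done
    echo "SSH is available"
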
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
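
The server cert generated above carries the SAN set [127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657], signed by the shared CA. minikube does this in Go; an openssl equivalent of the same shape (sketch, filenames illustrative):

    # Key + CSR for the machine, then sign with the CA, attaching the SANs.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -subj "/O=jenkins.old-k8s-version-366657" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.174,DNS:localhost,DNS:minikube,DNS:old-k8s-version-366657')
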
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
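
The clock check above samples the guest with `date +%s.%N` and compares it to the host's wall clock; only a delta beyond tolerance would force a resync. By hand (sketch):

    guest=$(ssh docker@192.168.39.174 date +%s.%N)
    host=$(date +%s.%N)
    # bc handles the fractional seconds; in the log the delta was ~0.096s.
    echo "clock delta: $(echo "$host - $guest" | bc) s"
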
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
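The stop/disable/mask sequence above is how minikube keeps docker and cri-docker from competing with CRI-O for the runtime slot. A sketch of the same sequence with os/exec; the unit names are taken from the log, and error handling is deliberately best-effort since a given unit may not exist on the ISO:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and only reports failures, matching the
// best-effort behavior in the log (a missing unit is not fatal).
func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		fmt.Printf("%v: %v (%s)\n", args, err, out)
	}
}

// disableUnit follows the order in the log: stop the socket and the
// service, disable the socket, then mask the service so nothing can
// re-activate it.
func disableUnit(socket, service string) {
	run("sudo", "systemctl", "stop", "-f", socket)
	run("sudo", "systemctl", "stop", "-f", service)
	run("sudo", "systemctl", "disable", socket)
	run("sudo", "systemctl", "mask", service)
}

func main() {
	disableUnit("cri-docker.socket", "cri-docker.service")
	disableUnit("docker.socket", "docker.service")
}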
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
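The three sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the crio restart: pin the pause image for this Kubernetes version, select cgroupfs as the cgroup manager, and put conmon in the pod's cgroup. Reconstructed from the sed expressions, the drop-in ends up carrying these values (the [crio.image]/[crio.runtime] section placement is stock CRI-O layout, assumed rather than visible in the log):

[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"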
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
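The bash one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the fresh mapping, and sudo-copy the staged file back (a plain redirection would not run as root). A rough Go equivalent of the same rewrite, writing the file directly instead of staging through /tmp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for name (the grep -v part)
// and appends a fresh "ip<TAB>name" mapping (the echo part).
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}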
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
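The exchange above is the usual apiserver readiness sequence: the first anonymous probe bounces off RBAC with 403, /healthz then returns 500 while poststarthooks (rbac/bootstrap-roles, apiservice-registration-controller, ...) finish, and finally it answers 200 "ok". A sketch of that poll, assuming TLS verification is skipped for brevity (minikube's real check in api_server.go verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			// 403 = RBAC not bootstrapped yet, 500 = poststarthooks
			// still running; both mean keep polling.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.251:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}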
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
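The retry.go lines above show libmachine polling libvirt's DHCP leases for the new domain's address with a jittered, growing backoff (296ms, 310ms, 414ms, ... 1.3s). A sketch of that pattern; getIP is a stand-in for the real lease lookup, and the exact backoff curve is an assumption:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// getIP stands in for the DHCP-lease lookup; it fails a few times here
// just to exercise the backoff.
func getIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.0.2.10", nil // placeholder address, not from the log
}

func main() {
	base := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := getIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Jitter around the base, then grow it, mirroring the
		// randomized waits in the log.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		base = base * 3 / 2
	}
}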
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
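The preload path above in one piece: stat shows /preloaded.tar.lz4 is absent, the ~450MB tarball is scp'd over, and tar unpacks it into /var with lz4 decompression so the image layers land directly in CRI-O's storage. The extract step as an os/exec sketch (the SSH plumbing that ssh_runner provides is omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the log: preserve security.capability xattrs and
	// decompress through lz4 while extracting under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preloaded images extracted into /var")
}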
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
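Since no preload exists for the v1.20.0/cri-o combination, LoadCachedImages falls back to per-image transfer: inspect each required image in the runtime, mark it "needs transfer" when it is missing or its ID differs from the expected hash, crictl rmi any stale copy, then load the cached tarball from the host. A condensed sketch of that decision; the types and names here are illustrative, not minikube's real API:

package main

import "fmt"

// imageState models what `podman image inspect --format {{.Id}}`
// answers: the image ID when present, absent otherwise.
type imageState map[string]string

// needsTransfer matches the cache_images.go check above: transfer when
// the image is missing or stored under a different ID.
func needsTransfer(runtime imageState, image, wantID string) bool {
	gotID, ok := runtime[image]
	return !ok || gotID != wantID
}

func main() {
	runtime := imageState{} // empty: nothing was preloaded, as in this run
	required := map[string]string{
		"registry.k8s.io/pause:3.2":              "80d28bedfe5d...", // hashes truncated from the log
		"registry.k8s.io/kube-apiserver:v1.20.0": "ca9843d3b545...",
	}
	for image, id := range required {
		if needsTransfer(runtime, image, id) {
			// Real flow: crictl rmi <image>, then load the tarball
			// from .minikube/cache/images/amd64/... over SSH.
			fmt.Printf("%q needs transfer\n", image)
		}
	}
}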
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
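The pod_ready.go entries above first skip pods whose hosting node still reports Ready=False, then poll each system-critical pod for its own Ready condition (kube-proxy-f5ttf is the first to pass). The per-pod half of that wait as a client-go sketch, assuming a reachable kubeconfig path; minikube's version also cross-checks the node status:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitPodReady(cs, "kube-system", "kube-proxy-f5ttf", 4*time.Minute))
}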
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
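The ls/openssl/ln triple repeated above is the standard OpenSSL hashed-lookup install: each CA is copied into /usr/share/ca-certificates, its subject hash is computed, and a <hash>.0 symlink is placed under /etc/ssl/certs so OpenSSL can locate the trust anchor. A sketch using the minikubeCA values from this run:

	# install a CA where OpenSSL's hash-based lookup will find it
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 in the log above
	sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"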
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
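Each of the six openssl runs above uses -checkend 86400, which exits non-zero when the certificate expires within the next 86400 seconds (24 hours). A minimal sketch of the same health check over a few of the control-plane client certs named in the log:

	# a non-zero exit from -checkend marks a cert that expires within the window
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    || echo "$c.crt expires within 24h"
	done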
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
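Rather than a full kubeadm init, the restart path replays individual init phases against the saved config, as the five Run lines above show. A sketch of that sequence, assuming the version-pinned kubeadm binary minikube stages under /var/lib/minikube/binaries:

	# restart path: replay selected kubeadm init phases against the saved config
	KUBEADM=/var/lib/minikube/binaries/v1.20.0/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo "$KUBEADM" init phase $phase --config "$CFG"
	done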
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
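The half-second cadence of the pgrep lines above is a poll for the apiserver process: -x requires the pattern to match exactly, -f matches it against the full command line, and -n keeps only the newest match. A minimal equivalent loop:

	# wait until a kube-apiserver process for this profile shows up
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done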
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
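WaitForSSH shells out to the system ssh with host-key persistence disabled and key-only authentication, running exit 0 as a liveness probe. Reconstructed from the argument vector logged above (key path shortened for readability):

	# probe: can we authenticate and run a trivial command?
	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o PasswordAuthentication=no -o IdentitiesOnly=yes \
	    -i .minikube/machines/embed-certs-360389/id_rsa \
	    -p 22 docker@192.168.72.32 'exit 0'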
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
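The provisioning step that just completed ends by dropping CRI-O's extra flags into a sysconfig file and bouncing the service; the multi-line SSH command above amounts to:

	# persist extra CRI-O flags, then restart so they take effect
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube >/dev/null
	sudo systemctl restart crio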
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
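Stop, disable, and mask is the usual systemd recipe for ensuring only CRI-O answers on the node; the lines above apply it to cri-docker and then docker itself. Condensed:

	# keep docker from racing CRI-O for the container runtime role
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service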
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
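Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with at least the following settings; this is a sketch of the keys touched, not the complete file, which may carry other options:

	# end state of the keys edited above (sketch)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]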
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
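The failed sysctl above (status 255) only means br_netfilter was not loaded yet; minikube loads the module and enables IPv4 forwarding, both prerequisites for pod traffic crossing the bridge:

	# load the bridge-netfilter module, after which the sysctl becomes resolvable
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo sysctl net.bridge.bridge-nf-call-iptables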
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
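Note: the sequence above is the preload fast path: when the runtime's image store is empty, minikube copies a prebuilt lz4 tarball of the Kubernetes images into the guest and unpacks it directly into /var. A sketch of the guest-side steps, with the path and tar flags taken from the log:

    # existence check, then unpack the preloaded image store into /var
    stat -c "%s %y" /preloaded.tar.lz4 || echo "no preload on the guest yet"
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4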
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
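Note: the retry lines above (pid 71227, an interleaved second profile) show libmachine polling for the VM's DHCP lease with a growing, jittered backoff until the domain reports an IPv4 address. A hypothetical manual equivalent, assuming libvirt's virsh is available on the host:

    # poll libvirt's lease data for the domain until an address appears
    until sudo virsh -c qemu:///system domifaddr default-k8s-diff-port-214905 | grep -q ipv4; do
      sleep 1
    done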
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
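Note: the repeated pgrep runs above (pid 71766, another interleaved profile) are the apiserver process wait: minikube polls roughly every 500ms for a kube-apiserver whose command line mentions minikube. The probe itself, verbatim from the log:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'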
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
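Note: the rendered file above bundles four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---"; it is written to /var/tmp/minikube/kubeadm.yaml.new and only swapped in if it differs from the running config. If the kubeadm in use offers it (the "kubeadm config validate" subcommand exists in recent releases; treat its availability here as an assumption), the file can be sanity-checked before use:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new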
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
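Note: the ls / openssl -hash / ln -fs sequence above implements OpenSSL's CA directory convention: each trusted certificate under /etc/ssl/certs must also be reachable through a symlink named <subject-hash>.0 so verification can find it by hash. A generalized sketch of the same steps (CERT is an illustrative placeholder):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    # create the hash-named symlink only if it is not already present
    sudo /bin/bash -c "test -L /etc/ssl/certs/$HASH.0 || ln -fs $CERT /etc/ssl/certs/$HASH.0"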
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
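Note: the -checkend 86400 runs above ask openssl whether each control-plane certificate will still be valid one day from now (86400 seconds); a non-zero exit would force certificate regeneration before the restart proceeds. The same check on any certificate:

    # exit 0 if the cert is valid for at least the next 24h, non-zero otherwise
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"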
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
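Note: rather than a full "kubeadm init", the restart path replays individual init phases against the existing cluster state: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order. A condensed sketch of the sequence, with paths as in the log:

    K=/var/lib/minikube/binaries/v1.30.3
    # $phase is left unquoted on purpose so "certs all" splits into two arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done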
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
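Note: readiness is gated on the apiserver's /healthz endpoint, and the responses above trace the usual startup progression: connection refused while the process comes up, 403 until the RBAC bootstrap grants anonymous access to the health endpoints, 500 while post-start hooks (rbac/bootstrap-roles, the system priority classes) finish, then 200 "ok". The same probe by hand, with -k because only the status matters here, not the serving certificate:

    curl -k https://192.168.72.32:8443/healthz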
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
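Note: "Configuring bridge CNI" here means writing a conflist into /etc/cni/net.d so CRI-O picks up the bridge plugin for the 10.244.0.0/16 pod CIDR chosen earlier. The exact 496-byte file minikube writes is not shown in the log; the following is only a hedged sketch of what a bridge conflist of that shape looks like, placed at /etc/cni/net.d/1-k8s.conflist:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }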
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
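The netfilter sequence above is a check-then-fix fallback: `sysctl net.bridge.bridge-nf-call-iptables` exits with status 255 while the br_netfilter kernel module is unloaded (the /proc/sys path does not exist yet), so minikube loads the module and then enables IPv4 forwarding. A minimal sketch of the same logic, assuming local execution rather than the SSH runner the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the fallback above: if the bridge-nf sysctl cannot
// be read, load br_netfilter, then turn on IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// sysctl exits non-zero when /proc/sys/net/bridge/... is absent.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}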
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
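The cert-installation loop above follows OpenSSL's trust-directory convention: each CA PEM is copied under /usr/share/ca-certificates, hashed with `openssl x509 -hash -noout`, and linked into /etc/ssl/certs as <subject-hash>.0, which is the name OpenSSL's lookup uses to find a trust anchor. A minimal sketch of one iteration, assuming local filesystem access and shelling out to openssl for the hash (linkCACert is an illustrative name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs certPath into the OpenSSL trust directory by linking it
// as <subject-hash>.0, the lookup name OpenSSL verification expects.
func linkCACert(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate `ln -fs`: replace any stale link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}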
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
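Each `openssl x509 -checkend 86400` run above asks whether a control-plane cert expires within the next 24 hours; a non-zero exit would trigger regeneration. An equivalent check in Go's standard library, under the assumption the file holds a single PEM certificate (expiresWithin is a hypothetical helper, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d,
// matching the semantics of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}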
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
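The runs of identical pgrep lines (from both process 71227 and 71766) are a fixed-interval poll: every 500ms minikube reruns `sudo pgrep -xnf kube-apiserver.*minikube.*` until the apiserver process exists or a deadline passes. A sketch of that loop, assuming local execution (waitForProcess is an illustrative name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` every interval until it matches
// or timeout elapses, like the repeated Run lines above.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil // pgrep exits 0 when a matching process exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("no process matched %q within %v", pattern, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}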
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
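The healthz probes above illustrate the expected bootstrap progression after `kubeadm init phase` on a restart: first 403 (the unauthenticated probe hits RBAC before the system:anonymous exemptions exist), then 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200 "ok". Any non-"ok" answer is therefore retryable. A sketch of such a poller, assuming a TLS-skipping client since the apiserver serves a cluster-CA cert the probe does not verify (waitHealthz is an illustrative name):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz until it returns 200 "ok".
// 403s (anonymous user) and 500s (post-start hooks pending) are retried.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Skip verification for this sketch; the probe is unauthenticated anyway.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %v", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.97:8444/healthz", time.Minute))
}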
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
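The pod_ready lines above (and the "Ready":"False" lines from the other test processes interleaved throughout) poll each system-critical pod's PodReady condition until it reports True or the per-pod budget expires. A client-go sketch of that per-pod check; the kubeconfig path and pod name are taken as illustrations from this log, and the loop structure is a simplification of minikube's wait logic:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, the same
// condition the pod_ready log lines above are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is illustrative
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the log waits up to 4m0s per pod
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-tr5z2", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for PodReady")
}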
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
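The metrics-server pods polled above keep reporting Ready:"False" for minutes on end. When reproducing this by hand, a quick way to see why is to inspect the pod and its recent events directly; a minimal sketch, assuming the standard k8s-app=metrics-server label applied by the minikube addon and a kubeconfig pointed at the affected profile:

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pods -l k8s-app=metrics-server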
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
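The wait sequence above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, then metrics-server) is minikube's post-restart per-pod Ready check for default-k8s-diff-port-214905. A minimal bash equivalent for a single pod, assuming the kubectl context name matches the profile name; the pod name is taken from the log:

    kubectl --context default-k8s-diff-port-214905 -n kube-system \
      wait --for=condition=Ready pod/coredns-7db6d8ff4d-tr5z2 --timeout=4m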
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
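Each diagnostic cycle for process 71766 enumerates the expected control-plane containers with crictl and finds none of them. The commands below are taken verbatim from the log and condensed into a loop; run them inside the node (e.g. via minikube ssh):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name" | grep -q . \
        || echo "No container was found matching \"$name\""
    done
    # fallback listing, as used for the "container status" gather
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a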
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
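Every "describe nodes" attempt above fails with connection refused on localhost:8443, i.e. the apiserver on this profile never came up, which is consistent with crictl finding no kube-apiserver container. Two quick checks for this state (the pgrep pattern is verbatim from the log; the healthz probe is a generic illustration, not a command minikube runs here):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"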
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
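The pgrep lines in this section fire roughly every 500ms, so minikube is polling for the apiserver process on a fixed interval. A minimal sketch of that loop; the interval and the upper bound are inferred from the log timestamps, not taken from minikube's source:

    for _ in $(seq 1 120); do    # ~60s illustrative bound
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done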
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
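Every "describe nodes" gather in this run fails identically: kubectl targets the apiserver at localhost:8443, and since no kube-apiserver container is running (all the crictl listings above return empty), the TCP connection is refused and kubectl exits with status 1. A minimal Go sketch of that probe, mirroring the logged command (illustrative only, not minikube's logs.go):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Same shell invocation the log records; with nothing listening on
	// localhost:8443, kubectl prints "The connection to the server
	// localhost:8443 was refused" on stderr and exits 1.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("failed describe nodes: %v\nstderr: %s\n", err, stderr.String())
	}
}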
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
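The interleaved pod_ready lines come from three other minikube processes (71396, 72069, 71227), each re-checking a metrics-server pod's Ready condition on an interval and logging "Ready":"False" on every miss. A minimal sketch of that polling pattern, where isPodReady is a hypothetical stand-in for a client-go pod-status lookup (this is not minikube's pod_ready.go):

package main

import (
	"fmt"
	"log"
	"time"
)

// waitPodReady re-runs the readiness probe until it reports true or the
// timeout elapses, logging each negative result like the lines above.
func waitPodReady(isPodReady func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := isPodReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		log.Println(`pod has status "Ready":"False"`) // matches the logged message
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %v waiting for pod to be ready", timeout)
}

func main() {
	// Always-false probe, so the loop logs until the (short) timeout fires.
	err := waitPodReady(func() (bool, error) { return false, nil }, time.Second, 3*time.Second)
	fmt.Println(err)
}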
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
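Each cycle above walks the same list of control-plane components and asks the CRI runtime for matching containers. A condensed sketch of that probe, using the exact crictl flags from the log (assumes crictl and sudo are available; illustrative, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// --quiet prints container IDs only, one per line; -a includes exited
	// containers. Empty output therefore means the component never started,
	// which is exactly the `found id: ""` / `0 containers: []` pattern above.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}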
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
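The "container status" gather is a shell one-liner with a docker fallback: the backtick substitution resolves crictl's path (or leaves the bare name if `which` finds nothing), and `||` retries with docker when the crictl invocation fails. A sketch of the same invocation from Go (assumes bash and sudo on the host):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Identical command line to the logged gather; CombinedOutput captures
	// whichever of the two listings ends up running.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	fmt.Println(string(out), err)
}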
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
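Between log-gathering passes, process 71766 loops on a pgrep check for a live apiserver: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match. A sketch of the same check (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means a kube-apiserver process matching the pattern
	// exists; a non-zero status is why the cycle keeps falling through to
	// the empty crictl listings above.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	fmt.Println("kube-apiserver process found:", err == nil)
}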
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
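For readers reproducing the container scan above by hand, each "listing CRI containers" step reduces to a single crictl call; a minimal sketch (the component name is illustrative — any of the names scanned above can be substituted):

    # List IDs of all containers (running or exited) whose name matches the component;
    # an empty result corresponds to the 'found id: ""' / "0 containers" lines above.
    sudo crictl ps -a --quiet --name=kube-apiserver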
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
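The pod_ready lines above come from polling the pod's Ready condition; an equivalent manual check, assuming a reachable apiserver (the pod name is taken from the log, the --context value is a placeholder):

    kubectl --context <profile> -n kube-system get pod metrics-server-78fcd8795b-k5q49 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'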
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
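The recurring "connection to the server localhost:8443 was refused" means no kube-apiserver is serving on the node, consistent with the empty pgrep and crictl scans; a quick manual probe on the guest (a sketch, assuming ss is available in the guest image):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # same probe the log runs; prints nothing here
    sudo ss -tlnp | grep 8443                      # nothing should be listening while this fails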
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
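The gathering steps in each cycle are plain shell commands executed over SSH; the exact invocations, copied from the Run: lines above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig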
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
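The "container status" step uses a runtime fallback: prefer crictl when installed, otherwise try docker. The same chain in modern quoting (behavior identical to the backtick form in the log line above):

    # "$(which crictl || echo crictl)" expands to the crictl path when present;
    # if the crictl invocation fails entirely, fall back to listing via docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a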
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
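Every "failed describe nodes" block in this run is the same symptom: no kube-apiserver container exists yet, so nothing is listening on localhost:8443 and kubectl's connection is refused. A tiny Go probe that reproduces the check (host and port taken from the log; everything else is illustrative):

// probe_apiserver.go — reproduces the symptom behind each failed describe.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver running this prints a "connection refused" error,
		// matching kubectl's message in the stderr blocks above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}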
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
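Interleaved with the scan, three other test processes (71396, 72069, 71227) keep polling their metrics-server pods, emitting a pod_ready line until the pod's Ready condition turns True. A sketch of an equivalent poll using kubectl's jsonpath filter — the pod and namespace names are copied from the log; the loop itself is illustrative, not minikube's internal client:

// pod_ready.go — poll a pod's Ready condition, as the pod_ready lines above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(namespace, name string) bool {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	for !podReady("kube-system", "metrics-server-569cc877fc-dm7k7") {
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod is Ready")
}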
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
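When the scan finds no containers, each retry falls back to gathering host-side logs, and every source in the cycle that just completed is a bash one-liner executed through ssh_runner. The sketch below replays those exact one-liners locally (illustrative only; minikube runs them over SSH inside the VM):

// gather_logs.go — replays the gathering phase's shell one-liners from the log.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log source through bash and prints whatever it produced.
func gather(label, script string) {
	fmt.Println("Gathering logs for", label, "...")
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("failed %s: %v\n", label, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}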
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
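
The cycle above is minikube's control-plane wait loop: it polls for a kube-apiserver process with pgrep, and when nothing is found it enumerates each expected CRI container by name and tails the kubelet, dmesg, and CRI-O journals. A condensed bash sketch of the same probe, built only from commands visible in the log:

    # Poll for an apiserver process; on failure, list the expected control-plane
    # containers and tail the relevant journals (same commands and tail lengths
    # as the log above).
    if ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                  kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
      done
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u crio -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    fi

Every crictl query in this run returns an empty list (found id: ""), which is why each pass ends in another round of log gathering instead of proceeding.
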
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
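
After 4m4.5s of failed restart attempts, minikube gives up on restarting the control plane in place and falls back to resetting the cluster and re-initializing it from the saved config; the same fallback recurs below for the v1.31.0-beta.0 run (process 71396). The two commands, exactly as the log runs them:

    # Fallback path: wipe kubeadm state, then re-init from the generated config.
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
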
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
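
The grep/rm sequence above, repeated once per file, is minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already points at the expected control-plane endpoint, and removed otherwise so kubeadm init can regenerate it. Condensed to a loop:

    # Keep each kubeconfig only if it targets the expected endpoint; in this run
    # every file is already missing (grep exits with status 2), so each rm -f
    # is a no-op.
    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
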
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
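
This is where the restarting clusters give up on their metrics-server pods: pod_ready polls the pod's Ready condition every couple of seconds and, as shown above, times out after 4m0s without the condition ever turning True. An equivalent manual check of the same condition (standard kubectl jsonpath; the pod name comes from the log):

    # Print the Ready condition that pod_ready is polling; "False" matches the log.
    kubectl --namespace kube-system get pod metrics-server-569cc877fc-k68zp \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
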
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
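
kubeadm v1.31 splits the wait into the two phases visible above: a kubelet health check (healthy after ~1s here) followed by an API server health check (healthy after ~5s). The kubelet side is roughly a poll of its local healthz endpoint; a sketch, assuming the kubelet's default healthz port 10248:

    # Roughly what the kubelet-check phase polls until success or the 4m0s deadline.
    until curl -sf http://127.0.0.1:10248/healthz >/dev/null; do sleep 1; done
    echo "kubelet healthy"
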
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
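
The join commands printed above pair a bootstrap token with a CA certificate hash. If the printed value is ever lost, the hash can be recomputed from the cluster CA using the standard recipe from the kubeadm docs; the certificate path below comes from the certificateDir the log reports (/var/lib/minikube/certs):

    # Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
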
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
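
The healthz probe above is a plain HTTPS GET against the apiserver; the endpoint comes straight from the log, and -k stands in for the client credentials minikube actually presents (whether anonymous access works depends on the apiserver's anonymous-auth setting):

    # Same probe by hand; a healthy apiserver returns HTTP 200 with body "ok".
    curl -k https://192.168.72.32:8443/healthz
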
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
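The oom_adj read above is a plain procfs lookup. A sketch (pgrep -n picks the newest matching PID, on the assumption there is one kube-apiserver process):

    cat /proc/$(pgrep -n kube-apiserver)/oom_adj
    # -16 biases the kernel OOM killer away from the apiserver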
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
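The listing/gathering pattern repeated throughout this section reduces to two crictl calls per component: discover the container ID, then tail its logs. One iteration, sketched with kube-apiserver as the example name:

    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"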
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
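The burst of identical "get sa default" runs above is a readiness poll for the default service account; the timestamps show a roughly 500ms cadence. In shell terms (binary path and kubeconfig as in this run; the interval is inferred from the timestamps):

    until sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done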
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
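A manual follow-up to the "Enabled addons" line, not something this run performs itself: confirm the metrics-server deployment actually landed (the context name follows minikube's profile-naming convention):

    kubectl --context no-preload-945581 -n kube-system get deploy metrics-server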
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
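The pod_ready waits above poll the API directly from Go; an equivalent manual check uses kubectl wait with one of the labels listed in the summary line (context name per minikube's convention):

    kubectl --context no-preload-945581 -n kube-system \
      wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m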
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
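The "minor skew" note on these Done! lines compares the client and server minor versions. The same comparison by hand (assumes jq is installed):

    kubectl version -o json | jq -r '"client \(.clientVersion.minor) server \(.serverVersion.minor)"'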
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
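The repeating [kubelet-check] failures above are kubeadm probing the kubelet's local health endpoint once per interval. The same probe, plus the usual follow-ups, run by hand on the node:

    curl -sSL http://localhost:10248/healthz   # the exact probe kubeadm reports
    sudo systemctl status kubelet              # is the unit running at all?
    sudo journalctl -u kubelet -n 100          # and if it is crash-looping, why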
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
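The four grep-then-rm pairs above are one loop in spirit: any kubeconfig under /etc/kubernetes that does not reference this cluster's endpoint (port 8444 in this run) is deleted so kubeadm init can rewrite it. Sketched:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8444" \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done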
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
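Note: this WARNING is benign in this context, because minikube starts the kubelet itself (see the "sudo systemctl start kubelet" Run line further down). On a hand-managed node the remedy is exactly the one the message names; a minimal sketch:

    sudo systemctl enable --now kubelet.service
    systemctl is-enabled kubelet.service   # should now print "enabled"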
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
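For context, the 496 bytes scp'd above are the bridge CNI conflist announced just before. The log does not show the file's contents; the following is only a representative bridge conflist (plugin set, subnet, and names are illustrative assumptions, not the exact bytes minikube wrote):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF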
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
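The -16 reported here is the apiserver's legacy OOM adjustment: negative values make the kernel's OOM killer much less likely to pick the process. To inspect it by hand (same pgrep pattern as the Run line above; oom_score_adj is the modern replacement for the deprecated oom_adj knob):

    cat /proc/$(pgrep kube-apiserver)/oom_adj
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj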
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
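The block of identical Run lines above is that 12.27s wait made visible: minikube polls "kubectl get sa default" roughly every 500ms (per the timestamps) until the default service account exists, which signals that kube-system privileges can be elevated. A rough bash equivalent of the loop, assuming the same 500ms cadence:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done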
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
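Only default-storageclass, metrics-server, and storage-provisioner are true in the toEnable map above; every other addon stays off for this profile. The same state can be inspected or changed per profile from the minikube CLI, e.g.:

    minikube -p default-k8s-diff-port-214905 addons list
    minikube -p default-k8s-diff-port-214905 addons enable metrics-server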
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
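After those four manifests apply, metrics-server readiness can be checked with plain kubectl (the APIService name below is the upstream metrics-server default, assumed to match these manifests). In this test run it is expected to stay unready: the deployment points at the stand-in image fake.domain/registry.k8s.io/echoserver:1.4 shown earlier, so the pod never leaves Pending.

    kubectl -n kube-system rollout status deploy/metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes   # only works once the APIService reports Available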
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
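That 6.4ms healthz round-trip is an ordinary HTTPS GET against the apiserver on its non-default port 8444 (hence the profile name default-k8s-diff-port). Reproduced by hand:

    curl -k https://192.168.61.97:8444/healthz   # -k: the cluster CA is not in the host trust store; expect "ok"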
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
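With the kubeconfig context set, the cluster is immediately usable from the host:

    kubectl config current-context   # default-k8s-diff-port-214905
    kubectl get nodes -o wide

The log lines that follow come from a different minikube process (71766) bootstrapping a separate Kubernetes v1.20.0 profile, and that bootstrap is the one that fails.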
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
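kubeadm's troubleshooting advice above reduces to a few node-local commands; with minikube they must run inside the guest (the profile name is elided here, since this excerpt never prints it for process 71766):

    minikube -p <profile> ssh
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause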
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
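The four grep-then-rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before init is retried (here each grep exits with status 2 because the files were already gone after the reset). The same sweep as one bash loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done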
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
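The repeated [kubelet-check] failures above are plain HTTP probes of the kubelet's local healthz endpoint; the same probe can be reproduced by hand on the node, using the exact command the kubeadm message quotes:

    # "connection refused" here means the kubelet never came up on port 10248
    curl -sSL http://localhost:10248/healthz
    # confirm the service state (kubeadm's own suggestion above)
    systemctl status kubelet --no-pager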
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
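The block above walks a fixed list of component names and asks CRI-O for matching containers, finding none for any of them. An equivalent manual sweep, with the crictl flags exactly as used in the log and the component list mirroring the one above:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done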
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
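The describe-nodes step fails with "connection refused" on localhost:8443, consistent with no kube-apiserver container existing at all. A quick cross-check on the node (port taken from the error; `ss` is assumed to be available in the minikube VM, which the log does not show):

    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"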
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
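As the boxed message suggests, the full diagnostic bundle for an issue report comes from minikube's own collector:

    minikube logs --file=logs.txt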
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 
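The suggestion at the end of the run maps to a concrete retry, with the flag exactly as given in the log (whether it resolves this particular failure is not tested here):

    minikube start --extra-config=kubelet.cgroup-driver=systemd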
	
	
	==> CRI-O <==
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.212096371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610625212068799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f45454b1-c398-414c-b138-aa837080ea2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.212603605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a053d27-c04e-41a7-b25f-849ddf451b94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.212664687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a053d27-c04e-41a7-b25f-849ddf451b94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.212865712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5,PodSandboxId:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727980174760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b,PodSandboxId:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727954431430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4,PodSandboxId:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721609726585326384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4,PodSandboxId:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721609725416983857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8,PodSandboxId:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721609714547651719,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba,PodSandboxId:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721609714578546198,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6,PodSandboxId:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721609714494323709,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30,PodSandboxId:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721609714443498839,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf,PodSandboxId:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721609424925744005,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a053d27-c04e-41a7-b25f-849ddf451b94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.224711948Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c53a2563-b618-4855-aee5-b4a0f4fe33ab name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.225015034Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-68wll,Uid:0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609727712893727,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T00:55:26.497455545Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-9j27w,Uid:6979f6f9-75ac-49d9-adaf-71524576aad3,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609727694781471,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T00:55:26.485946195Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc4772de1eaf53303556cdaa286523415725fefbb827371fc5f9043736520281,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-l858z,Uid:0f17da27-a5bf-46ea-bbb8-00ee2f308542,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609726520748152,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-l858z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f17da27-a5bf-46ea-bbb8-00ee2f308542,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-07-22T00:55:26.205271795Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0448fcfd-604d-47b4-822e-bc0d117d3b2e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609726412490680,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-22T00:55:26.103803143Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&PodSandboxMetadata{Name:kube-proxy-g56gz,Uid:81c84dcd-74b2-44b3-b25e-4074cfe2881d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609725298016138,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-22T00:55:24.982519062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-945581,Uid:78a3bc5c3e001457a5031a7022a013a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609714313237612,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 78a3bc5c3e001457a5031a7022a013a4,kubernetes.io/config.seen: 2024-07-22T00:55:13.861992598Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&PodSandboxMetadata{Name:kube-controller-m
anager-no-preload-945581,Uid:ffbf4901cbdfd3f44f04f34ad80ba5ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609714311462747,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ffbf4901cbdfd3f44f04f34ad80ba5ce,kubernetes.io/config.seen: 2024-07-22T00:55:13.861990556Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-945581,Uid:d933df5461e83068804e0d24b2eeaa8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721609714309937420,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.251:2379,kubernetes.io/config.hash: d933df5461e83068804e0d24b2eeaa8b,kubernetes.io/config.seen: 2024-07-22T00:55:13.861984297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-945581,Uid:66a4fbf4e1b85a82bdfb3c5a3c11917d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721609714299302531,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.251:844
3,kubernetes.io/config.hash: 66a4fbf4e1b85a82bdfb3c5a3c11917d,kubernetes.io/config.seen: 2024-07-22T00:55:13.861988883Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-945581,Uid:66a4fbf4e1b85a82bdfb3c5a3c11917d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721609424738657192,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.251:8443,kubernetes.io/config.hash: 66a4fbf4e1b85a82bdfb3c5a3c11917d,kubernetes.io/config.seen: 2024-07-22T00:50:24.253368689Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=c53a2563-b618-4855-aee5-b4a0f4fe33ab name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.225823184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68286b3e-523c-4d96-968f-9bce0f602ba1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.225892562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68286b3e-523c-4d96-968f-9bce0f602ba1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.226095853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5,PodSandboxId:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727980174760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b,PodSandboxId:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727954431430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4,PodSandboxId:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721609726585326384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4,PodSandboxId:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721609725416983857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8,PodSandboxId:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721609714547651719,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba,PodSandboxId:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721609714578546198,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6,PodSandboxId:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721609714494323709,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30,PodSandboxId:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721609714443498839,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf,PodSandboxId:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721609424925744005,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68286b3e-523c-4d96-968f-9bce0f602ba1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.253391640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a72bec09-7ad5-4ba3-ada3-6bd310cb84f6 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.253490951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a72bec09-7ad5-4ba3-ada3-6bd310cb84f6 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.255059974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89e37957-f17b-4d11-b09f-2733a55b91fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.255458106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610625255430866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89e37957-f17b-4d11-b09f-2733a55b91fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.256270796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9777e2a5-ae67-4732-8e00-e5f39a617db9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.256331769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9777e2a5-ae67-4732-8e00-e5f39a617db9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.256765414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5,PodSandboxId:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727980174760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b,PodSandboxId:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727954431430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4,PodSandboxId:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721609726585326384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4,PodSandboxId:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721609725416983857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8,PodSandboxId:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721609714547651719,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba,PodSandboxId:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721609714578546198,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6,PodSandboxId:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721609714494323709,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30,PodSandboxId:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721609714443498839,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf,PodSandboxId:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721609424925744005,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9777e2a5-ae67-4732-8e00-e5f39a617db9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.297510121Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=1cda5bb5-4bd9-42d5-86a7-d771ee094143 name=/runtime.v1.RuntimeService/Status
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.297656732Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1cda5bb5-4bd9-42d5-86a7-d771ee094143 name=/runtime.v1.RuntimeService/Status
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.300971487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3c4764b-37c8-4015-9c5b-322d6a029e8a name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.301071198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3c4764b-37c8-4015-9c5b-322d6a029e8a name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.306851732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=369e2dd5-7440-426f-844d-ffff7677d1b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.307345552Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610625307311712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=369e2dd5-7440-426f-844d-ffff7677d1b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.308375561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5826314f-4aa1-4a0f-a172-cac4dd013b29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.308509442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5826314f-4aa1-4a0f-a172-cac4dd013b29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:25 no-preload-945581 crio[715]: time="2024-07-22 01:10:25.309086612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5,PodSandboxId:1ec0525f0da3798634d704fe2073d21b32b4ae8ef9d9afa4534082ddda870a81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727980174760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-68wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9fbbef-f095-45c2-ae45-2c4be3a22e0d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b,PodSandboxId:39220f03453a969a4df862e8a19f3fc13ddcbc413c4c34cee71b44efbb71dc7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609727954431430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9j27w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6979f6f9-75ac-49d9-adaf-71524576aad3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4,PodSandboxId:1b16938ff7bcd6259c889d45b9f49c629da49b7911aff1fc199dd9b4bf890244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721609726585326384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0448fcfd-604d-47b4-822e-bc0d117d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4,PodSandboxId:2b43b946ec07a8023b31b3d73d5720624f903aca9803f31a7bfa0baacecb6b1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721609725416983857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g56gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c84dcd-74b2-44b3-b25e-4074cfe2881d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8,PodSandboxId:a751729723ef90150209c0244bb08ded6d26a7cddcfb1ea1eea6cf68dcc6427e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721609714547651719,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78a3bc5c3e001457a5031a7022a013a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba,PodSandboxId:d7a60351cd728c7e270a12f10caae49d8e5547eb2deac62fd40a42ba204b34bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721609714578546198,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933df5461e83068804e0d24b2eeaa8b,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6,PodSandboxId:e4bddabdca8551bbd2b1c99573a7d588e112abeb628e8911e2c50cea968e34f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721609714494323709,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffbf4901cbdfd3f44f04f34ad80ba5ce,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30,PodSandboxId:41a74b5018194f489464cf1a0e89fd7be120fccefcdc0131820601e32071f2f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721609714443498839,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf,PodSandboxId:f6f719f80db34f429d601cfa8a0e6b9eaeabeb33ad3905e6a28c271f4c98d983,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721609424925744005,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-945581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a4fbf4e1b85a82bdfb3c5a3c11917d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5826314f-4aa1-4a0f-a172-cac4dd013b29 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ddb5673ebc910       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   1ec0525f0da37       coredns-5cfdc65f69-68wll
	c15b7cf4a9c99       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   39220f03453a9       coredns-5cfdc65f69-9j27w
	901b26fcd1ca9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   1b16938ff7bcd       storage-provisioner
	dbe524b3dbde3       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   2b43b946ec07a       kube-proxy-g56gz
	af839eb6670b9       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   15 minutes ago      Running             etcd                      2                   d7a60351cd728       etcd-no-preload-945581
	945ddd91e654d       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   15 minutes ago      Running             kube-scheduler            2                   a751729723ef9       kube-scheduler-no-preload-945581
	4e13520bd3680       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   15 minutes ago      Running             kube-controller-manager   2                   e4bddabdca855       kube-controller-manager-no-preload-945581
	e1ee8c2526929       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   15 minutes ago      Running             kube-apiserver            2                   41a74b5018194       kube-apiserver-no-preload-945581
	d165172b79f19       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   20 minutes ago      Exited              kube-apiserver            1                   f6f719f80db34       kube-apiserver-no-preload-945581
	
	
	==> coredns [c15b7cf4a9c9968a892ecde2c61f566e0b1fe0771c9aeb53794e5c1e34dce53b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ddb5673ebc91074ae8f16fece09a974df9fab307f4905a0ad9f7c0f8dbc436e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-945581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-945581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=no-preload-945581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:55:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-945581
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 01:10:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 01:05:42 +0000   Mon, 22 Jul 2024 00:55:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 01:05:42 +0000   Mon, 22 Jul 2024 00:55:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 01:05:42 +0000   Mon, 22 Jul 2024 00:55:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 01:05:42 +0000   Mon, 22 Jul 2024 00:55:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.251
	  Hostname:    no-preload-945581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a82bdef081e54ecbb38e19ac2a58d2df
	  System UUID:                a82bdef0-81e5-4ecb-b38e-19ac2a58d2df
	  Boot ID:                    2b3f0c55-5d35-4493-bb2f-e403074cac36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-68wll                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5cfdc65f69-9j27w                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-945581                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-945581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-945581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-g56gz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-945581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-78fcd8795b-l858z              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-945581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-945581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-945581 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node no-preload-945581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node no-preload-945581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node no-preload-945581 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-945581 event: Registered Node no-preload-945581 in Controller
	
	
	==> dmesg <==
	[  +0.050401] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038106] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.465426] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.710945] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Jul22 00:50] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.524360] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.056029] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062446] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.167060] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.161401] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.280392] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[ +14.231418] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +0.059555] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.533069] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +5.691232] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.688230] kauditd_printk_skb: 86 callbacks suppressed
	[Jul22 00:55] systemd-fstab-generator[2909]: Ignoring "noauto" option for root device
	[  +0.063959] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.483086] systemd-fstab-generator[3230]: Ignoring "noauto" option for root device
	[  +0.075754] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.412438] systemd-fstab-generator[3345]: Ignoring "noauto" option for root device
	[  +0.095544] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.885626] kauditd_printk_skb: 90 callbacks suppressed
	
	
	==> etcd [af839eb6670b9805792dfe3f030640b4672a0265778c19189021456b4bf0f7ba] <==
	{"level":"info","ts":"2024-07-22T00:55:15.456692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T00:55:15.456722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 received MsgPreVoteResp from 439bb489ce44e0e1 at term 1"}
	{"level":"info","ts":"2024-07-22T00:55:15.456737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:55:15.456742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 received MsgVoteResp from 439bb489ce44e0e1 at term 2"}
	{"level":"info","ts":"2024-07-22T00:55:15.45675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T00:55:15.456757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 439bb489ce44e0e1 elected leader 439bb489ce44e0e1 at term 2"}
	{"level":"info","ts":"2024-07-22T00:55:15.460793Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:55:15.461014Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"439bb489ce44e0e1","local-member-attributes":"{Name:no-preload-945581 ClientURLs:[https://192.168.50.251:2379]}","request-path":"/0/members/439bb489ce44e0e1/attributes","cluster-id":"dd9b68cf7bac6d9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:55:15.461177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:55:15.462002Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:55:15.46415Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T00:55:15.464188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dd9b68cf7bac6d9","local-member-id":"439bb489ce44e0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:55:15.471658Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:55:15.471697Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:55:15.464451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:55:15.471725Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T00:55:15.466306Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-22T00:55:15.472365Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.251:2379"}
	{"level":"info","ts":"2024-07-22T00:55:15.474739Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T01:05:15.534261Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":726}
	{"level":"info","ts":"2024-07-22T01:05:15.543511Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":726,"took":"8.515894ms","hash":1293828335,"current-db-size-bytes":2396160,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2396160,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-22T01:05:15.543647Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1293828335,"revision":726,"compact-revision":-1}
	{"level":"info","ts":"2024-07-22T01:10:15.542404Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-07-22T01:10:15.546956Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":969,"took":"3.747488ms","hash":2944806042,"current-db-size-bytes":2396160,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1642496,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-22T01:10:15.547053Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2944806042,"revision":969,"compact-revision":726}
	
	
	==> kernel <==
	 01:10:25 up 20 min,  0 users,  load average: 0.02, 0.09, 0.09
	Linux no-preload-945581 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d165172b79f1915dbebe6fea35be080752c4469f8da221be7f4de3a7ccebfdcf] <==
	W0722 00:55:05.941018       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:05.953782       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.045891       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.388463       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.440713       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.501201       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.590870       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.711221       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:09.756924       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.030308       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.084339       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.114948       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.267976       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.280726       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.368626       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.429997       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.457955       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.520317       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.723906       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.726247       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.747542       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.812061       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.823486       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.889069       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0722 00:55:10.929539       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e1ee8c2526929084b4ef871554e26110239564f73a7ddb95c56917f804312b30] <==
	I0722 01:06:18.168619       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 01:06:18.170638       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:08:18.168888       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:08:18.169091       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0722 01:08:18.170341       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:08:18.171509       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:08:18.171650       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0722 01:08:18.172896       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:10:17.170371       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:10:17.170856       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0722 01:10:18.173013       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:10:18.173107       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0722 01:10:18.173160       1 handler_proxy.go:99] no RequestInfo found in the context
	E0722 01:10:18.173192       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0722 01:10:18.174241       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0722 01:10:18.174302       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4e13520bd3680930dcc2e93ab24dbf4842f6196ef413797266e3136971ce56b6] <==
	I0722 01:05:25.188496       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:05:25.196305       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:05:42.711841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-945581"
	I0722 01:05:55.196677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:05:55.201456       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:06:25.208649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:06:25.209695       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:06:34.991278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="173.926µs"
	I0722 01:06:48.991917       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="126.114µs"
	E0722 01:06:55.216668       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:06:55.220205       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:07:25.224349       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:07:25.229350       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:07:55.232468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:07:55.237806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:08:25.240998       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:08:25.246369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:08:55.247687       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:08:55.256258       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:09:25.254524       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:09:25.263013       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:09:55.261980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:09:55.272004       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:10:25.269725       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0722 01:10:25.281071       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
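	# Diagnostic sketch (not part of the recorded run): the garbage-collector and resource-quota
	# controllers re-run API discovery roughly every 30s, which is why the same pair of errors
	# repeats above; a single stale aggregated group (metrics.k8s.io/v1beta1) trips both.
	# Probing the group directly reproduces the discovery failure (standard kubectl):
	kubectl --context no-preload-945581 get --raw /apis/metrics.k8s.io/v1beta1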
	
	
	==> kube-proxy [dbe524b3dbde34266aa37faff5943ec8e3e5dc7669fc00b44225d0a0399dbec4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0722 00:55:25.765206       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0722 00:55:25.777051       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.251"]
	E0722 00:55:25.777121       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0722 00:55:25.838662       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0722 00:55:25.838709       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:55:25.838774       1 server_linux.go:170] "Using iptables Proxier"
	I0722 00:55:25.843119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0722 00:55:25.843399       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0722 00:55:25.843424       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:55:25.845257       1 config.go:197] "Starting service config controller"
	I0722 00:55:25.845296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:55:25.845319       1 config.go:104] "Starting endpoint slice config controller"
	I0722 00:55:25.845324       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:55:25.854102       1 config.go:326] "Starting node config controller"
	I0722 00:55:25.854219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:55:25.945984       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:55:25.946068       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:55:25.954259       1 shared_informer.go:320] Caches are synced for node config
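	# Configuration sketch (not part of the recorded run): kube-proxy's own hint above maps to the
	# nodePortAddresses field of its configuration. On a kubeadm-bootstrapped cluster like this one
	# that config lives in the kube-system/kube-proxy ConfigMap (assumption: standard kubeadm
	# layout), so the current value can be checked with:
	kubectl --context no-preload-945581 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	# Setting it to "primary", as the warning suggests, restricts NodePort listeners to the node's
	# primary addresses instead of all local IPs.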
	
	
	==> kube-scheduler [945ddd91e654d22c8f63fb4372ce68379a073dc68cb535f393a0664b9e5e1ad8] <==
	W0722 00:55:17.179013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 00:55:17.179040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:17.179239       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:55:17.179270       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0722 00:55:17.180230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:55:17.180263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:17.181849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:17.181880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.018618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:55:18.018758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.050255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:55:18.050311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.062418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:18.062467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.198325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:55:18.198371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.231369       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:55:18.231466       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0722 00:55:18.264094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:18.264230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.267702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:18.267837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0722 00:55:18.345298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 00:55:18.345415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0722 00:55:20.262453       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
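	# Verification sketch (not part of the recorded run): the "forbidden" errors above are the
	# usual startup race, where the scheduler's informers begin listing before RBAC bootstrapping
	# finishes; the final "Caches are synced" line shows it recovered. The grant can be checked
	# after the fact via impersonation (standard kubectl):
	kubectl --context no-preload-945581 auth can-i list pods --all-namespaces --as=system:kube-scheduler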
	
	
	==> kubelet <==
	Jul 22 01:08:20 no-preload-945581 kubelet[3237]: E0722 01:08:20.031994    3237 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:08:20 no-preload-945581 kubelet[3237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:08:20 no-preload-945581 kubelet[3237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:08:20 no-preload-945581 kubelet[3237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:08:20 no-preload-945581 kubelet[3237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:08:20 no-preload-945581 kubelet[3237]: E0722 01:08:20.973691    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:08:34 no-preload-945581 kubelet[3237]: E0722 01:08:34.974186    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:08:48 no-preload-945581 kubelet[3237]: E0722 01:08:48.973895    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:09:01 no-preload-945581 kubelet[3237]: E0722 01:09:01.974769    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:09:16 no-preload-945581 kubelet[3237]: E0722 01:09:16.973498    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:09:20 no-preload-945581 kubelet[3237]: E0722 01:09:20.031908    3237 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:09:20 no-preload-945581 kubelet[3237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:09:20 no-preload-945581 kubelet[3237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:09:20 no-preload-945581 kubelet[3237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:09:20 no-preload-945581 kubelet[3237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:09:31 no-preload-945581 kubelet[3237]: E0722 01:09:31.973211    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:09:42 no-preload-945581 kubelet[3237]: E0722 01:09:42.973984    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:09:53 no-preload-945581 kubelet[3237]: E0722 01:09:53.973904    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:10:08 no-preload-945581 kubelet[3237]: E0722 01:10:08.973781    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
	Jul 22 01:10:20 no-preload-945581 kubelet[3237]: E0722 01:10:20.031781    3237 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:10:20 no-preload-945581 kubelet[3237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:10:20 no-preload-945581 kubelet[3237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:10:20 no-preload-945581 kubelet[3237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:10:20 no-preload-945581 kubelet[3237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:10:21 no-preload-945581 kubelet[3237]: E0722 01:10:21.975815    3237 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l858z" podUID="0f17da27-a5bf-46ea-bbb8-00ee2f308542"
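	# Context (not part of the recorded run): this ImagePullBackOff is the test fixture, not a
	# regression. The audit table later in this report shows metrics-server was enabled with
	# --registries=MetricsServer=fake.domain, so pulling fake.domain/registry.k8s.io/echoserver:1.4
	# can never succeed. The per-pod events can be inspected with (label selector assumed from the
	# minikube addon manifests):
	kubectl --context no-preload-945581 -n kube-system describe pod -l k8s-app=metrics-server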
	
	
	==> storage-provisioner [901b26fcd1ca9bc7aec7ec36c4b66faa82406fad6023b175dc7a63afbcaa4be4] <==
	I0722 00:55:26.693365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:55:26.708482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:55:26.708633       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:55:26.719741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:55:26.720475       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a68e8e9-db17-44cc-b224-e2d6df163c4e", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-945581_8810de60-9f6d-46bf-99a2-c9646514563c became leader
	I0722 00:55:26.720515       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-945581_8810de60-9f6d-46bf-99a2-c9646514563c!
	I0722 00:55:26.821291       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-945581_8810de60-9f6d-46bf-99a2-c9646514563c!
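	# Inspection sketch (not part of the recorded run): the storage-provisioner still uses
	# Endpoints-based leader election, so the lease is the kube-system/k8s.io-minikube-hostpath
	# Endpoints object shown above, with the holder recorded in the annotation that client-go's
	# endpoints lock has historically used (control-plane.alpha.kubernetes.io/leader):
	kubectl --context no-preload-945581 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml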
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-945581 -n no-preload-945581
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-945581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-l858z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-945581 describe pod metrics-server-78fcd8795b-l858z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-945581 describe pod metrics-server-78fcd8795b-l858z: exit status 1 (63.121846ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-l858z" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-945581 describe pod metrics-server-78fcd8795b-l858z: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (348.35s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (354.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-22 01:11:24.779468712 +0000 UTC m=+6403.488067458
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-214905 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-214905 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.767µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-214905 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
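For reference, the image check that start_stop_delete_test.go:297 performs can be reproduced by hand once the deployment exists; a sketch with standard kubectl (the jsonpath expression is illustrative, not taken from the test source):
	kubectl --context default-k8s-diff-port-214905 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'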
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-214905 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-214905 logs -n 25: (2.008393627s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p newest-cni-590595                  | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 01:10 UTC | 22 Jul 24 01:10 UTC |
	| delete  | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 01:10 UTC | 22 Jul 24 01:10 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:19.710843   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:25.790913   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:28.862882   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:34.942917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:38.014863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:44.094898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:47.166853   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:53.246799   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:56.318939   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:02.398890   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:05.470909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:11.550863   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:14.622851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:20.702859   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:23.774851   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:29.854925   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:32.926912   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:39.006904   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:42.078947   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:48.158822   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:51.230942   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:48:57.310909   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:00.382907   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:06.462849   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:09.534836   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:15.614953   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:18.686869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:24.766917   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:27.838869   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:33.918902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:36.990920   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:43.070898   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
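
	The retry.go lines above poll for the domain's DHCP lease with delays that grow roughly exponentially with jitter (218ms, 289ms, 404ms, ... 2.7s). A sketch of that wait-for-IP pattern, with a stand-in lookupIP in place of the libvirt lease query:

	// Hypothetical sketch of the backoff seen in retry.go above: poll for the
	// domain's DHCP lease until an IP appears or a deadline passes.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for querying the libvirt network's DHCP leases.
	func lookupIP(mac string) (string, bool) { return "", false }

	func waitForIP(mac string, deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, ok := lookupIP(mac); ok {
				return ip, nil
			}
			// jittered, growing delay, matching the pattern in the log
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 3*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", fmt.Errorf("machine %s did not get an IP within %v", mac, deadline)
	}

	func main() {
		if _, err := waitForIP("52:54:00:2e:d4:7d", 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}
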
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
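
	WaitForSSH above shells out to the external ssh binary and runs `exit 0` until the command succeeds. A minimal sketch of such a probe, reusing the options and key path shown in the log:

	// Hypothetical sketch of the external-ssh availability probe above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(ip, keyPath string) bool {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@" + ip, "exit 0",
		}
		// a zero exit status means sshd is up and the key is accepted
		return exec.Command("/usr/bin/ssh", args...).Run() == nil
	}

	func main() {
		ip := "192.168.50.251"
		key := "/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa"
		for i := 0; i < 40; i++ { // bounded, unlike an open-ended wait
			if sshReady(ip, key) {
				fmt.Println("SSH available")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for SSH")
	}
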
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
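
	The provision step above mints a server certificate signed by the local CA, with SANs covering 127.0.0.1, the VM IP, localhost, minikube, and the machine name. A self-contained sketch of that kind of issuance with crypto/x509; the CA here is generated in-process purely for illustration (the real flow loads ca.pem/ca-key.pem from the store), and error handling is elided for brevity:

	// Hypothetical sketch: issue a server cert whose SANs match the log line.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative in-process CA; errors elided for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SAN set seen above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-945581"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-945581"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.251")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
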
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
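
	The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and skip resyncing when the delta is inside a tolerance (93ms here). A sketch of that comparison; the 2-second tolerance constant is an assumption for illustration, not taken from the log:

	// Hypothetical sketch of the guest-clock tolerance check above.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(out string) (time.Time, error) {
		// `date +%s.%N` prints seconds and a 9-digit nanosecond fraction
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec := int64(0)
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1721609407.082052746") // value from the log
		host := guest.Add(-93178108 * time.Nanosecond)      // pretend host reading
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Println("would resync guest clock")
		}
	}
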
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
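
	cni.go above disables any bridge/podman CNI configs by renaming them with an .mk_disabled suffix (the find/-exec mv command two lines up). A Go equivalent of that rename pass, as a sketch:

	// Hypothetical sketch of the CNI cleanup above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func disableBridgeCNI(dir string) ([]string, error) {
		var disabled []string
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
	}
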
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
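
	The sequence above is a fallback: the sysctl probe exits with status 255 because /proc/sys/net/bridge does not exist until br_netfilter is loaded, so the module is loaded and IPv4 forwarding is then enabled. A sketch of that check-then-modprobe logic (root is required to actually run it):

	// Hypothetical sketch of the netfilter setup above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil // key exists, module already loaded
		}
		// "cannot stat /proc/sys/net/bridge/..." usually means br_netfilter
		// is not loaded yet; loading it creates the sysctl keys.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println(err)
		}
	}
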
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
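	The cache-load phase above follows one pattern per image: probe the runtime with `sudo podman image inspect --format {{.Id}}`, mark the image "needs transfer" when the expected ID is absent, `stat -c "%s %y"` the cached tarball on the VM to skip redundant copies, then `sudo podman load -i` the archive. A minimal Go sketch of that decision loop, assuming podman is available and using a hypothetical image ID and tarball path (not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// present reports whether the container runtime already holds the image at
	// the expected ID, using the same `podman image inspect` probe as the log.
	func present(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		return err == nil && strings.TrimSpace(string(out)) == wantID
	}

	func main() {
		img := "registry.k8s.io/pause:3.10"
		wantID := "0123abcd..." // placeholder, not a real digest
		tarball := "/var/lib/minikube/images/pause_3.10"

		if present(img, wantID) {
			fmt.Println("skipping, already in runtime:", img)
			return
		}
		// Otherwise the tarball would be transferred first; then load it,
		// mirroring `sudo podman load -i <tarball>` from the log.
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			fmt.Println("load failed:", err)
		}
	}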
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
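	The rendered config above is four YAML documents in one stream, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration (which deliberately disables disk-pressure housekeeping via imageGCHighThresholdPercent: 100 and 0% evictionHard thresholds, sensible for a throwaway test VM), and KubeProxyConfiguration. A small Go sketch of splitting such a stream into its documents (illustrative only):

	package main

	import (
		"fmt"
		"strings"
	)

	// splitYAMLDocs splits a multi-document YAML stream, like the kubeadm
	// config above, on its "---" separator lines, dropping empty documents.
	func splitYAMLDocs(stream string) []string {
		var docs []string
		for _, d := range strings.Split(stream, "\n---\n") {
			if s := strings.TrimSpace(d); s != "" {
				docs = append(docs, s)
			}
		}
		return docs
	}

	func main() {
		cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
		for i, d := range splitYAMLDocs(cfg) {
			fmt.Printf("doc %d starts: %s\n", i+1, strings.SplitN(d, "\n", 2)[0])
		}
	}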
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
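	The pair of commands above keeps /etc/hosts in sync: the plain grep checks whether the exact `192.168.50.251<tab>control-plane.minikube.internal` entry already exists, and the compound command strips any stale line for that hostname, appends the current mapping, stages the result in a temp file, and copies it back with sudo (a bare `>` redirect would not run as root). A hedged Go sketch that renders the same one-liner:

	package main

	import "fmt"

	// hostsRefreshCmd renders the /etc/hosts refresh one-liner seen in the
	// log: drop any line ending in "<tab><host>", append the fresh mapping,
	// stage in /tmp, then `sudo cp` back, since sudo does not apply to a
	// shell redirect.
	func hostsRefreshCmd(ip, host string) string {
		return fmt.Sprintf(
			"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s	%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
			host, ip, host)
	}

	func main() {
		fmt.Println(hostsRefreshCmd("192.168.50.251", "control-plane.minikube.internal"))
	}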
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
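	Each of the three CA installs above ends with the same pairing: `openssl x509 -hash -noout` prints the certificate's subject hash (b5213941 for minikubeCA here), and a symlink named `<hash>.0` is placed in /etc/ssl/certs, the lookup scheme OpenSSL uses to find trust anchors. A sketch of deriving that symlink name in Go by shelling out to openssl (assumes openssl is on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// symlinkTarget returns the /etc/ssl/certs/<subject-hash>.0 path that
	// OpenSSL expects for a PEM certificate, using the same
	// `openssl x509 -hash -noout` probe as the log.
	func symlinkTarget(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		if t, err := symlinkTarget("/usr/share/ca-certificates/minikubeCA.pem"); err == nil {
			fmt.Println("ln -fs /etc/ssl/certs/minikubeCA.pem", t)
		}
	}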
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
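	The six `openssl x509 -checkend 86400` probes above exit non-zero when a certificate expires within the next 86400 seconds (24 hours), which is what would trigger regeneration. A pure-Go equivalent using crypto/x509, sketched against one of the paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports
	// whether the certificate's NotAfter falls inside the window from now.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}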
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
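	The loop above walks admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf, greps each for the expected control-plane URL, and removes files that do not match; exit status 2 here simply means the files do not exist yet on this freshly restarted VM. A hedged Go sketch of the same prune-if-stale rule:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pruneStaleConf keeps a kubeconfig only if it references the expected
	// control-plane endpoint, mirroring the grep-then-`rm -f` loop in the
	// log. A missing file is fine (nothing to prune), matching rm -f.
	func pruneStaleConf(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			return nil // up to date: keep
		}
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			return err
		}
		return nil // removed (or absent) so kubeadm can regenerate it
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf"} {
			if err := pruneStaleConf(f, endpoint); err != nil {
				fmt.Println("prune failed:", err)
			}
		}
	}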
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
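	Provisioning generates one server certificate whose SAN list mixes IP addresses and hostnames ([127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657] above). In Go's crypto/x509 those land in two separate template fields; a small sketch of sorting such a list (illustrative, not the libmachine code):

	package main

	import (
		"crypto/x509"
		"fmt"
		"math/big"
		"net"
	)

	// sanTemplate sorts a mixed SAN list like the one in the log into the
	// separate IPAddresses/DNSNames fields an x509 template requires.
	func sanTemplate(sans []string) *x509.Certificate {
		t := &x509.Certificate{SerialNumber: big.NewInt(1)}
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				t.IPAddresses = append(t.IPAddresses, ip)
			} else {
				t.DNSNames = append(t.DNSNames, s)
			}
		}
		return t
	}

	func main() {
		t := sanTemplate([]string{"127.0.0.1", "192.168.39.174", "localhost", "minikube", "old-k8s-version-366657"})
		fmt.Println("IPs:", t.IPAddresses, "DNS:", t.DNSNames)
	}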
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
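Every Run: line in this log is a command executed over SSH against the VM. A minimal sketch of that mechanism with golang.org/x/crypto/ssh follows, reusing the key path and address from the sshutil lines above; minikube's own wrapper is ssh_runner/sshutil, so this is an approximation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-366657/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.174:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl restart crio && echo restarted")
	fmt.Printf("%s err=%v\n", out, err)
}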
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
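The fix.go lines above compare the guest clock against the host and only resync when the absolute delta exceeds a tolerance. A small sketch of that check, with a 2s tolerance assumed for illustration:

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between guest and host time.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	host := time.Now()
	guest := host.Add(96 * time.Millisecond) // delta from the log above
	if d := clockDelta(guest, host); d < 2*time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Println("would resync guest clock")
	}
}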
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
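The crio.go:166 sequence above is a fallback: the bridge-nf-call-iptables sysctl is missing until br_netfilter is loaded, so minikube loads the module and then enables IPv4 forwarding. A sketch of that flow, using the real /proc paths from the log (run as root on the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Mirrors "couldn't verify netfilter ... which might be okay" above.
		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enable ip_forward (needs root):", err)
	}
}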
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
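The api_server.go loop above tolerates the 403 and 500 responses and keeps polling /healthz until it returns 200. A sketch of that wait, skipping TLS verification because the probe runs before the client trusts the cluster certs; the deadline and poll interval here are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.251:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 means RBAC isn't bootstrapped yet; 500 means hooks still failing.
			fmt.Println("healthz returned", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}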
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
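The retry.go:31 lines above poll for the VM's IP with a growing, jittered delay (296ms, 310ms, 414ms, ... 1.32s). A sketch of that pattern follows; the backoff constants are assumptions, and the real helper lives in minikube's pkg/util/retry.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// sleeping a jittered, growing delay between tries.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow roughly like the intervals in the log
	}
	return errors.New("machine never came up")
}

func main() {
	tries := 0
	_ = retryWithBackoff(10, 300*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("no IP yet")
		}
		return nil
	})
	fmt.Println("got IP after", tries, "tries")
}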
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
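The 3.1s extraction timed above is the logged tar command run on the guest. A sketch of issuing it via os/exec, using the exact flags and paths from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Preserve xattrs (security.capability) and decompress with lz4, as logged.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preload in %v\n", time.Since(start))
}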
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
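The 496-byte file written above is the bridge CNI conflist. A sketch of writing a file of that shape follows; the JSON contents are an illustrative approximation of minikube's default bridge config, not the file byte-for-byte.

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Needs root on the guest; the log shows minikube doing this over SSH.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}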
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
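The pod_ready.go waits above fetch each system pod and test its Ready condition, skipping pods on a node that is not yet Ready. A sketch of that check with client-go; the kubeconfig path, namespace, and pod name are taken from the log and environment as assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // "waiting up to 4m0s" in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-945581", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}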
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
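[Editor's note] The retry.go lines above ("will retry after 1.712692655s", "will retry after 1.484222747s", ...) show backoff with jitter while waiting for the VM to get an IP. A minimal sketch of that behaviour, assuming a hypothetical helper (not minikube's retry package):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries op with an increasing, jittered delay until it succeeds
	// or the deadline passes.
	func waitFor(op func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := time.Second
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting: %w", err)
			}
			// Up to +50% jitter keeps concurrent waiters from retrying in
			// lockstep, which is why the logged delays look irregular.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
			delay *= 2
		}
	}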
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
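	[Editor's note] The kubeadm config above is rendered output, not hand-written YAML (the kubeadm.go:187 line precedes it). A cut-down, hypothetical illustration of how such a config can be produced with Go's text/template; struct and field names here are invented for the example:

	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		K8sVersion       string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		// Values taken from the log above for illustration.
		p := kubeadmParams{"192.168.39.174", 8443, "old-k8s-version-366657", "10.244.0.0/16", "v1.20.0"}
		template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
	}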
	
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
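	[Editor's note] The six openssl runs above all use `-checkend 86400`, which succeeds only if the certificate is still valid 24 hours from now. A rough Go equivalent of that check (a sketch, not minikube code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside the given window, e.g. 24*time.Hour for -checkend 86400.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}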
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
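	[Editor's note] The repeated pgrep runs above poll at a roughly half-second cadence until the apiserver process appears. A sketch of that wait (run locally here via os/exec; minikube issues the same command over SSH through ssh_runner):

	package main

	import (
		"bytes"
		"errors"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until a matching kube-apiserver process
	// shows up or the timeout expires.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(bytes.TrimSpace(out)) > 0 {
				return nil // apiserver process is up
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("kube-apiserver process did not appear in time")
	}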
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
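	[Editor's note] The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew if it is inside a small tolerance. A sketch of the delta computation (illustrative only; float64 parsing loses sub-microsecond precision, which is fine at this tolerance):

	package main

	import (
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns the
	// absolute host/guest skew; the caller compares it against a tolerance.
	func clockDelta(guestDateOutput string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestDateOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}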
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
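
After the sed passes above, the touched fragment of /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (reconstructed from the substitutions; the rest of the file is not shown in this log):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
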
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
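
The sysctl probe fails with status 255 because br_netfilter is not loaded yet, so the runner falls back to modprobe and then enables IPv4 forwarding. A rough local sketch of that fallback in Go, using os/exec in place of minikube's ssh_runner:

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes a command and logs its combined output on failure.
    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Printf("%s %v failed: %v (%s)", name, args, err, out)
        }
        return err
    }

    func main() {
        // Probing the key fails until the br_netfilter module is loaded.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            // Fallback: loading br_netfilter creates /proc/sys/net/bridge/*.
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                log.Fatal(err)
            }
        }
        // Enable forwarding the same way the log does.
        if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            log.Fatal(err)
        }
    }
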
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
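
The three steps above are an existence check (stat), a copy of the cached preload tarball, and an lz4 extraction into /var. A simplified local-filesystem sketch of the same check-then-copy-then-extract flow (the real code transfers over SSH via ssh_runner; paths here are illustrative):

    package main

    import (
        "fmt"
        "io"
        "os"
        "os/exec"
    )

    // ensurePreload copies src to dst only when dst is missing, then extracts it.
    func ensurePreload(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already present, skip the transfer
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        if _, err := io.Copy(out, in); err != nil {
            return err
        }
        // Same extraction the log runs: tar with lz4 decompression into /var.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", dst)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }
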
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
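
The retry.go lines above poll libvirt for a DHCP lease, sleeping a randomized, growing interval between attempts (232ms, 274ms, 470ms, ...). A minimal sketch of that jittered-backoff pattern; lookupIP is a hypothetical stand-in for the lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    // waitForIP retries lookupIP with a jittered, growing delay until it
    // succeeds or the attempt budget is spent.
    func waitForIP(attempts int) (string, error) {
        backoff := 200 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            backoff += backoff / 2 // grow the base delay, roughly as the log shows
        }
        return "", errors.New("machine never came up")
    }

    func main() {
        if _, err := waitForIP(10); err != nil {
            fmt.Println(err)
        }
    }
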
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
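
Each pair of commands above computes a CA's OpenSSL subject hash and links the PEM into /etc/ssl/certs under `<hash>.0` (for example b5213941.0), which is how OpenSSL-based clients locate trust anchors. A sketch of that pairing, shelling out to openssl the way the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL
    // subject hash with a ".0" suffix, e.g. /etc/ssl/certs/b5213941.0.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
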
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
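
Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509, as a sketch (the path in main is one of the files probed above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // before now+window, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
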
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
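
The healthz sequence above is the usual restart progression: connection refused while the apiserver binds, 403 while anonymous access is still blocked ahead of RBAC bootstrap, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200. A sketch of such a poll loop; InsecureSkipVerify is an assumption here, since the probe runs before client credentials are wired up:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns 200 or the deadline passes.
    func waitHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Assumed: skip TLS verification, as no client certs are loaded yet.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(500 * time.Millisecond) {
            resp, err := client.Get(url)
            if err != nil {
                continue // connection refused while the apiserver starts
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
        }
        return fmt.Errorf("%s never became healthy", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.72.32:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
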
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
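
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI recommended above for the kvm2 + crio combination. A sketch of writing a representative bridge conflist; the JSON below is an assumed example of the format, not minikube's verbatim file:

package main

import "os"

func main() {
	// Representative bridge CNI config (assumed contents, not the exact 496-byte file).
	conflist := []byte(`{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [{
    "type": "bridge",
    "bridge": "bridge",
    "isDefaultGateway": true,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
  }]
}`)
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", conflist, 0o644); err != nil {
		panic(err)
	}
}
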
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
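
The NodePressure verification reads the node's capacity and pressure conditions. A sketch of an equivalent check via kubectl (the node name comes from the log; the kubectl invocation is illustrative and assumes a configured kubeconfig):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// All three pressure conditions should report "False" on a healthy node.
	for _, cond := range []string{"MemoryPressure", "DiskPressure", "PIDPressure"} {
		out, err := exec.Command("kubectl", "get", "node", "embed-certs-360389",
			"-o", fmt.Sprintf(`jsonpath={.status.conditions[?(@.type==%q)].status}`, cond)).Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s=%s\n", cond, strings.TrimSpace(string(out)))
	}
}
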
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
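
Each per-pod wait above short-circuits because the hosting node is not yet Ready: a pod-level Ready check is only meaningful once the node reports Ready, so the waiter logs the node status and skips. A sketch of that two-level gate (kubectl usage is illustrative, not minikube's internal client):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func conditionStatus(kind, name, cond string) string {
	out, err := exec.Command("kubectl", "get", kind, name, "-n", "kube-system",
		"-o", fmt.Sprintf(`jsonpath={.status.conditions[?(@.type==%q)].status}`, cond)).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Node not Ready => skip the pod-level wait, exactly as the log reports.
	node, _ := exec.Command("kubectl", "get", "node", "embed-certs-360389",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if strings.TrimSpace(string(node)) != "True" {
		fmt.Println("node not Ready, skipping pod waits")
		return
	}
	fmt.Println("coredns Ready:", conditionStatus("pod", "coredns-7db6d8ff4d-7mzsv", "Ready"))
}
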
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
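
The sshutil clients above wrap key-based SSH to the guest as user docker. A minimal equivalent with golang.org/x/crypto/ssh; host-key checking is disabled here for brevity (an assumption for the sketch, not necessarily what sshutil does):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.72.32:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("hostname")
	fmt.Printf("%s", out)
}
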
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
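
With metrics-server enabled, the "Verifying addon metrics-server" step above amounts to confirming that its APIService becomes Available. A sketch of the same check; v1beta1.metrics.k8s.io is the APIService name metrics-server conventionally registers:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "get", "apiservice", "v1beta1.metrics.k8s.io",
		"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}`).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	fmt.Printf("metrics-server Available: %s\n", out)
}
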
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
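
Process 71766 belongs to a different profile in the same run; it polls every ~500ms until a kube-apiserver process matching the minikube manifest appears. A sketch of that retry loop (executed locally here for simplicity; the real runner sends the command over SSH):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for {
		// pgrep exits non-zero when nothing matches, so err doubles as "not found yet".
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(bytes.TrimSpace(out)) > 0 {
			fmt.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
}
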
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
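
WaitForSSH simply retries `exit 0` through the external ssh client until it succeeds, using the option list dumped above. A sketch of that loop; the key flags are copied from the log, while the overall timeout budget is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10", "-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa",
		"docker@192.168.61.97", "exit 0",
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
	panic("timed out waiting for SSH")
}
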
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
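
The server cert generated above is signed by the machine CA and carries exactly the SAN list in the provision.go line. A compact crypto/x509 sketch of that signing step; paths are illustrative, nil checks on pem.Decode are elided, and the CA key is assumed to be PKCS#1 PEM:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("certs/ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1
	check(err)

	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-214905"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list from the provision.go line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.97")},
		DNSNames:    []string{"default-k8s-diff-port-214905", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &priv.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}

func check(err error) {
	if err != nil {
		panic(err)
	}
}
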
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
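
The fix.go lines above parse the guest's "date +%s.%N" output and accept the machine if the guest/host clock delta stays inside a tolerance. A minimal Go sketch of that comparison using the timestamps from this run; the tolerance constant and helper name are assumptions, not minikube's actual code:

package main

import (
	"fmt"
	"time"
)

// clockTolerance is an assumed threshold; the log only shows that ~82ms passes.
const clockTolerance = 2 * time.Second

func clockDeltaOK(guest, remote time.Time) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= clockTolerance
}

func main() {
	guest := time.Unix(1721609467, 506036600)  // parsed from "date +%s.%N" on the VM
	remote := time.Unix(1721609467, 424041395) // host-side wall clock at the same check
	delta, ok := clockDeltaOK(guest, remote)
	fmt.Printf("delta=%v, within tolerance=%v\n", delta, ok) // delta = 81.995205ms
}
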
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
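
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, reset conmon_cgroup, and open unprivileged ports via default_sysctls. A sketch of the first two rewrites as Go regexp replacements; this mirrors the sed expressions from the log, it is not minikube's implementation, and file I/O is elided:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the first two "sed -i" edits from the log:
// pin the pause image and force the cgroupfs cgroup manager.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "# pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}
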
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
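
The sequence above is a probe-and-fallback: the sysctl read exits with status 255 because br_netfilter is not yet loaded, so minikube loads the module and then enables IPv4 forwarding. A hedged Go sketch of the same sequence; the helper name is hypothetical, the commands are copied from the log:

package main

import (
	"log"
	"os/exec"
)

// ensureBridgeNetfilter reproduces the probe-and-fallback seen in the log.
// It must run on a host where sudo is available.
func ensureBridgeNetfilter() error {
	// The probe fails while br_netfilter is unloaded (exit status 255 above).
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl probe failed (%v); loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	// Matches: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		log.Fatal(err)
	}
}
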
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
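
The bash one-liner above is an idempotent /etc/hosts update: grep -v strips any existing line ending in a tab plus the name, the fresh mapping is appended, and the result is copied back with sudo. A small Go sketch of just the filter-and-append step; the function name is hypothetical:

package main

import (
	"fmt"
	"strings"
)

// rewriteHosts drops any existing line for name, then appends ip<TAB>name.
// The real work in the log happens in the bash one-liner shown above.
func rewriteHosts(hosts, ip, name string) string {
	suffix := "\t" + name
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, suffix) {
			continue // stale entry: removed before re-adding
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+suffix)
	return strings.Join(kept, "\n")
}

func main() {
	fmt.Println(rewriteHosts("127.0.0.1\tlocalhost", "192.168.61.1", "host.minikube.internal"))
}
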
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
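
Back-of-the-envelope throughput for the preload path above: the 406,200,976-byte (~387 MiB) tarball was copied over SSH in ~1.29 s (406,200,976 / 1.292 ≈ 314 MB/s, plausible for scp into a local KVM guest) and extracted with lz4 in ~2.30 s (≈176 MB/s), after which the tarball is removed.
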
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
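
The 2169-byte kubeadm.yaml.new written above bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration shown earlier. A minimal read-back sketch, assuming gopkg.in/yaml.v3 is available, that checks the kubelet's cgroupDriver matches the cgroupfs value configured into CRI-O at 00:51:08:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Fragment of the KubeletConfiguration generated above.
const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

func main() {
	var cfg struct {
		CgroupDriver string `yaml:"cgroupDriver"`
	}
	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroupDriver:", cfg.CgroupDriver) // "cgroupfs", matching 02-crio.conf
}
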
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
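
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes: test -L checks whether the link exists, and ln -fs (re)points <hash>.0 at the PEM so OpenSSL's lookup-by-hash finds the CA in /etc/ssl/certs. A sketch of that idiom, shelling out to openssl for the hash; the helper name is hypothetical:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mimics "ln -fs <pem> <certsDir>/<hash>.0", taking the
// hash from "openssl x509 -hash -noout" exactly as the logged commands do.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}
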
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
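
Each "openssl x509 -noout -checkend 86400" above exits non-zero exactly when the certificate will expire within the next 86400 seconds (24 hours), which is what would trigger regeneration. The same predicate in stdlib Go; the path is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the condition openssl's -checkend flag tests for.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
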
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
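
The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing config rather than performing a full init. A sketch of that sequencing; the phase names are taken verbatim from the log, the driver program is hypothetical:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
		}
	}
}
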
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
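
The polling above treats both responses as retryable: 403 because anonymous access to /healthz is rejected until the RBAC bootstrap roles exist, and 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing; the priority-classes hook flips to ok by 00:51:21 below. A hedged Go sketch of such a wait loop, not minikube's implementation; TLS verification is skipped only because the sketch has no CA bundle, whereas minikube authenticates properly:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200, retrying on errors and on
// not-ready statuses such as the 403/500 responses seen in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 both mean "not ready yet": fall through and retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.97:8444/healthz", time.Minute))
}
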
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
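
The block above is minikube's api_server.go polling https://192.168.61.97:8444/healthz roughly every 500ms until the [-] post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) flip to [+] and the endpoint returns 200. A minimal sketch of the same poll by hand; the curl flags and the assumption that /healthz is reachable without a client certificate are mine, not taken from the log:

    HOST=https://192.168.61.97:8444
    while true; do
      code=$(curl -sk --max-time 2 -o /dev/null -w '%{http_code}' "$HOST/healthz")
      [ "$code" = "200" ] && break
      # /healthz?verbose prints one line per check; show only the failing ones.
      # Depending on RBAC, curl may also need --cert/--key for an apiserver client cert.
      curl -sk --max-time 2 "$HOST/healthz?verbose" | grep -E '^\[-\]'
      sleep 0.5
    done
    echo "apiserver healthy"
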
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
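
Interleaved with the health checks, process 71766 (the old-k8s-version profile, as the v1.20.0 binaries later in the log suggest) keeps running the same pgrep every ~500ms, waiting for a kube-apiserver process to appear. The pattern, wrapped in a deadline loop (a sketch; the 4-minute budget is an assumption, not minikube's actual timeout):

    # Wait for an apiserver process whose full command line mentions minikube.
    deadline=$(( $(date +%s) + 240 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo 'apiserver never started' >&2; exit 1; }
      sleep 0.5
    done
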
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
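
The 496-byte /etc/cni/net.d/1-k8s.conflist itself is not reproduced in the log. For orientation, a bridge CNI conflist generally has the shape below; every field value here is an assumption, not the file minikube actually wrote:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
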
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
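
The NodePressure step reads the node's capacity (ephemeral storage 17734596Ki, 2 CPUs) and its pressure conditions. The same fields are visible with kubectl; the context name below is inferred from the profile name in this log:

    # Capacity fields checked above (cpu, ephemeral-storage) ...
    kubectl --context default-k8s-diff-port-214905 get nodes -o \
      jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'
    # ... and the MemoryPressure/DiskPressure/PIDPressure conditions.
    kubectl --context default-k8s-diff-port-214905 describe nodes | grep -A 6 'Conditions:'
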
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
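
pod_ready.go polls each system-critical pod's Ready condition for up to 4m0s; the recurring metrics-server lines throughout this section are that same loop never observing Ready=True. A kubectl equivalent for a single pod (a sketch):

    # Block until the pod reports Ready, with the same 4-minute budget.
    kubectl -n kube-system wait pod coredns-7db6d8ff4d-tr5z2 \
      --for=condition=Ready --timeout=4m
    # For a pod stuck NotReady, the condition's reason/message usually says why.
    kubectl -n kube-system get pod metrics-server-569cc877fc-dm7k7 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
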
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
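
The describe-nodes failure is consistent with the empty crictl listings above it: with no kube-apiserver container running, nothing serves localhost:8443 and the connection is refused. A quick way to confirm that from inside the node (a sketch):

    if sudo ss -ltn | grep -q ':8443'; then
      echo 'something is listening on 8443'
    else
      echo 'nothing listening on 8443; the refused connection is expected'
    fi
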
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
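
Each gathering cycle first queries CRI-O for every expected component one name at a time. The per-name calls can be compressed into a loop using the same crictl flags the log shows:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "${name}: ${ids:-<none>}"
    done
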
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
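
The repeated "No container was found matching ..." warnings above are minikube probing each expected control-plane container by name through crictl. A minimal Go sketch of that probe, under the assumption that crictl is installed locally (minikube's real implementation runs these commands over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the probe in the log:
//   sudo crictl ps -a --quiet --name=<name>
// crictl prints one container ID per line, or nothing when no container matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("crictl failed for %s: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("%s: %d container(s) found\n", name, len(ids))
		}
	}
}

An empty result for every name, as seen throughout this log, means the control plane never came up on the node.
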
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
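
The interleaved pod_ready lines come from three parallel test processes (71396, 72069, 71227) polling their metrics-server pods for the Ready condition. A rough client-go sketch of such a poll; the kubeconfig path and pod name below are placeholders, not values from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; each test profile uses its own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// Placeholder pod name; the real tests resolve it by label selector.
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-xxxxx", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get pod:", err)
		} else {
			status := corev1.ConditionFalse
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					status = c.Status
				}
			}
			fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, status)
			if status == corev1.ConditionTrue {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}
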
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
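
Every "describe nodes" attempt fails identically because nothing is listening on the apiserver port the kubeconfig points at. A quick standalone check (a sketch, not part of the test suite) that reproduces the same "connection refused" condition:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl above dials localhost:8443; when the kube-apiserver container
	// never started, the TCP dial fails with "connection refused" exactly as
	// kubectl reports.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}
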
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
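
Each polling round ends with the same four collection steps: kubelet and CRI-O logs via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback. A sketch that runs the identical commands locally (the log executes them through minikube's SSH runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exactly the commands shown in the "Gathering logs for ..." lines above.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\n(err=%v)\n%s\n", c, err, out)
	}
}
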
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
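
Each round opens with a pgrep for a running kube-apiserver process before falling back to the crictl probes. A minimal equivalent; pgrep exits non-zero when nothing matches, which is what a dead control plane looks like here:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the log's: sudo pgrep -xnf kube-apiserver.*minikube.*
	// (-x exact match, -n newest process, -f match the full command line).
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}
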
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
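
The entries above form one complete probe-and-gather cycle of the retry loop in PID 71766: pgrep checks for a live kube-apiserver process, crictl then lists each expected control-plane container, and since every listing returns no IDs the harness falls back to collecting describe-nodes, CRI-O, container-status, kubelet, and dmesg output before retrying a few seconds later. A minimal sketch of that probe sequence, runnable by hand inside the minikube guest (commands copied from the log lines above; the loop wrapper is illustrative, not the harness code):

    # Probe for a running apiserver, then enumerate each expected
    # control-plane container. `crictl ps -a --quiet` prints only
    # container IDs, so empty output is what the log records as
    # `found id: ""`.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
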
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
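
Each "failed describe nodes" block in this section records the same root cause: with no apiserver container running, the pinned kubectl binary cannot reach localhost:8443 and exits with status 1, which logs.go:130 captures together with the empty stdout and the connection-refused stderr. The check can be reproduced by hand with the exact paths shown in the log:

    # Run the pinned kubectl against the node-local kubeconfig and
    # inspect the exit status; it stays 1 until the apiserver on
    # localhost:8443 comes back up.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit status: $?"
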
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
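
The interleaved pod_ready lines come from three other test processes (PIDs 71396, 72069, 71227) polling their metrics-server pods every couple of seconds; "Ready":"False" means the pod's Ready condition has not yet turned True. A sketch for spot-checking the same condition manually (pod name and namespace taken from the log; the jsonpath filter prints True or False):

    # Query just the Ready condition that pod_ready.go:102 is waiting on.
    kubectl -n kube-system get pod metrics-server-569cc877fc-dm7k7 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
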
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
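
The block above is one full pass of minikube's control-plane probe: for each expected component it lists CRI containers in every state and finds none, so the node has no running control plane at all. A minimal sketch of the same probe, assuming only that crictl is installed on the node (the real loop lives in minikube's cri.go and runs the commands over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Probe each component the way the log does: `crictl ps -a --quiet
    // --name=<component>` prints one container ID per line, so empty
    // output is the "0 containers: []" case seen above.
    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("probe %q failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }
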
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
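
Interleaved with that probe, three other test processes (PIDs 71396, 72069 and 71227) keep polling their metrics-server pods, which never leave "Ready":"False". A sketch of that kind of readiness poll, under the assumption that kubectl can reach the cluster; the pod name is taken from the log and the timeout is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // Read the pod's Ready condition, which is what pod_ready.go keeps
    // reporting as "False" in the log above.
    func podReady(namespace, pod string) (bool, error) {
    	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		if ready, err := podReady("kube-system", "metrics-server-78fcd8795b-k5q49"); err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for Ready")
    }
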
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
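
Each "describe nodes" attempt fails identically: the kubeconfig points at localhost:8443 and the connection is refused, which is consistent with the empty kube-apiserver probe, since nothing is serving the API. A quick reachability check for that endpoint (the address is copied from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // With no kube-apiserver container, nothing listens on the kubeconfig's
    // endpoint, so kubectl reports "connection ... refused" as in the log.
    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // the state shown above
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }
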
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
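
With no containers to inspect, each pass falls back to host-level sources: the kubelet and CRI-O units via journalctl, recent kernel messages via dmesg, and a container-status listing that tries crictl and then docker. A sketch that replays those commands locally (they are lifted verbatim from the Run: lines above; on a real node they go through SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Each command is run through `/bin/bash -c`, mirroring the
    // ssh_runner invocations in the log.
    func main() {
    	cmds := []struct{ name, cmd string }{
    		{"kubelet", `sudo journalctl -u kubelet -n 400`},
    		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
    		{"CRI-O", `sudo journalctl -u crio -n 400`},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
    		fmt.Printf("== %s (err=%v) ==\n%s\n", c.name, err, out)
    	}
    }

The container-status command is worth noting: `which crictl || echo crictl` keeps the pipeline from failing outright when crictl is missing, and the trailing `|| sudo docker ps -a` falls back to Docker on runtimes without crictl.
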
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
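
Each cycle begins with `sudo pgrep -xnf kube-apiserver.*minikube.*`, a process-level check that runs before the CRI probes. The flags mean: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match; pgrep exits 1 when nothing matches. A small sketch of the same check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Exit status 1 from pgrep means "no process matched"; any output is
    // the PID of the newest matching kube-apiserver process.
    func main() {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("no running kube-apiserver process:", err)
    		return
    	}
    	fmt.Printf("kube-apiserver PID: %s", out)
    }
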
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
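Process 71766 (the v1.20.0 old-k8s-version start) is stuck in a diagnostic loop: it pgreps for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs; "kubectl describe nodes" keeps failing because nothing is serving on localhost:8443. A minimal sketch of that container probe, run inside the guest (e.g. via "minikube ssh"); the component list and crictl flags mirror the log lines above, nothing else is assumed:

    # Probe each control-plane component the way cri.go does: an empty
    # result from crictl is what produces the repeated
    # 'No container was found matching "<name>"' warnings.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done

The same cycle repeats below on a few-second interval until the start attempt times out.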
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
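This scan enumerates every container name the collector expects on a control-plane node (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and each query returns an empty id list, confirming that no control-plane container was ever created. The same scan, condensed into one loop on the node:

    # Count matching containers per expected component; every count is zero here
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
        kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%s: %d\n' "$name" "$(sudo crictl ps -a --quiet --name="$name" | wc -l)"
    done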
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
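"describe nodes" is the only collection step that needs a live apiserver, so it exits with status 1 for as long as port 8443 refuses connections; the warning prints the failing command twice because of how the collector formats the error, not because the probe ran twice. The probe itself, reflowed for readability:

    # The collector's exact probe; it succeeds only once an apiserver answers on 8443
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig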
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
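Three other test runs (processes 71396, 72069 and 71227) are interleaved in this capture, each polling a metrics-server pod in kube-system whose status stays "Ready":"False", which is what keeps their metrics-server checks failing. A one-shot equivalent of that poll (assuming the addon's usual k8s-app=metrics-server label; substitute the real profile for the context name):

    # Wait for the metrics-server pod to become Ready, or time out after 60s
    kubectl --context <profile> -n kube-system wait pod \
        -l k8s-app=metrics-server --for=condition=Ready --timeout=60s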
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
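The pass above is one complete log-gathering cycle from PID 71766: a crictl query per control-plane component (every one of them empty on this node), followed by kubelet, dmesg, describe-nodes, CRI-O, and container-status sweeps. A minimal bash sketch of the same sweep, runnable by hand on the node; the component list and commands are copied from the log, not from minikube's source:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%-26s' "$c"
      sudo crictl ps -a --quiet --name="$c" | wc -l    # 0 = no container found
    done
    sudo journalctl -u kubelet -n 400                   # kubelet logs, as gathered above
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a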
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
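The interleaved pod_ready lines come from three concurrent runs (PIDs 71396, 72069, 71227), each polling a metrics-server pod that never reports Ready. An equivalent one-off check with kubectl; the k8s-app=metrics-server label is an assumption based on the standard metrics-server deployment, since the log only shows pod names:

    kubectl -n kube-system get pod -l k8s-app=metrics-server    # READY column stays 0/1
    kubectl -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=metrics-server --timeout=4m                # these runs time out on the same 4m0s budget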
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
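Each of these cycles opens with the same probe for a live apiserver process; only a successful match would let PID 71766 stop re-gathering logs, and at 00:54:38 below it gives up and resets the cluster instead. The probe itself, copied from the log and runnable directly:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # prints a PID and exits 0 only once the apiserver runs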
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
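The four grep-then-remove passes above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not point at the expected control-plane endpoint is deleted before kubeadm init re-runs (here none of the files exist at all, which is why every grep exits with status 2). The same logic as a compact loop, with the file list and endpoint copied from the log:

    ep='https://control-plane.minikube.internal:8443'
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"    # grep failure (status 1 or 2) -> drop the file
    done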
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
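The v1.20.0 init above reuses every certificate already under /var/lib/minikube/certs, rewrites the four kubeconfigs, writes the static-pod manifests, and then waits up to 4m0s for the kubelet to boot them. A sketch of watching that boot by hand on the node, using the standard kubeadm paths shown in the log:

    ls /etc/kubernetes/manifests/                       # etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
    watch -n2 'sudo crictl ps --name kube-apiserver'    # the static pod appears once the kubelet picks up its manifest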
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
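The join command printed above embeds a fresh bootstrap token and the CA certificate hash. Both are recoverable later on the control plane if needed; a sketch using this run's certificate directory (/var/lib/minikube/certs, per the [certs] lines above):

    sudo kubeadm token list                                    # lists 9e6gcb.... until it expires
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'             # re-derives the discovery-token-ca-cert-hash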
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
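For reference, the --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA certificate. A minimal Go sketch (the PKI path is the kubeadm default, assumed here):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// kubeadm's default CA location; adjust if your PKI lives elsewhere.
    	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The discovery hash is SHA-256 over the DER-encoded
    	// SubjectPublicKeyInfo of the cluster CA certificate.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }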
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
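The healthz probe logged above is a plain HTTPS GET against the apiserver. A minimal sketch of the same request (TLS verification relaxed only for brevity here; real code should verify against the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// InsecureSkipVerify is for this sketch only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.72.32:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }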
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
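The 496-byte conflist copied above is the bridge CNI configuration selected earlier for the kvm2 + crio combination. A representative example of what such a file contains (field values here are illustrative, not the exact bytes minikube wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }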
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
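The kube-system pod sweep above maps to a straightforward list call. A sketch of the equivalent check with client-go (kubeconfig path taken from the log; everything else is an assumption of this sketch):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		// Phase alone is what the "... Running" lines above report.
    		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
    	}
    }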
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
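The repeated "kubectl get sa default" runs above are a poll: privileges are considered elevated once the default ServiceAccount resolves. A minimal sketch of that loop (binary and kubeconfig paths copied from the log; the 2-minute timeout is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
    	}
    	fmt.Println("timed out waiting for default service account")
    }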
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
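Each pod_ready wait above polls the pod's Ready condition until it is True. A minimal helper showing that check, as a sketch against the k8s.io/api types:

    package podready

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True — the
    // same condition the pod_ready waits in the log poll for.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }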
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
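(Editor's note: the repeated connection-refused errors on port 10248 mean the kubelet never came up on that node. The first-line checks are the same ones kubeadm itself suggests further down in this log:)

    # Is the kubelet unit running at all?
    systemctl status kubelet
    # Recent kubelet logs with explanatory context:
    journalctl -xeu kubelet
    # The health endpoint kubeadm polls:
    curl -sSL http://localhost:10248/healthz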
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
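(Editor's note: the four grep/rm pairs above all implement one pattern: keep an existing kubeconfig only if it already references the expected API endpoint, otherwise delete it so kubeadm regenerates it. A compact sketch of the same loop, with the endpoint and paths taken from this run:)

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # Remove the file unless it references the expected endpoint.
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done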
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
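(Editor's note: as the preflight output says, the image pull can be done ahead of time so init does not block on the network. Using the kubeadm config path from this run:)

    # Pre-pull control-plane images before 'kubeadm init'.
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml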
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
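(Editor's note: both health gates above poll plain HTTP(S) endpoints and can be reproduced by hand; the apiserver URL matches the healthz check later in this log:)

    # Kubelet health (10248 is the kubelet healthz port):
    curl -sSL http://localhost:10248/healthz
    # API server health; -k because the serving cert is cluster-internal:
    curl -sk https://192.168.61.97:8444/healthz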
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
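(Editor's note: the --discovery-token-ca-cert-hash printed above is a SHA-256 over the cluster CA's public key, and can be recomputed on the control plane to verify a join command. A sketch following the standard kubeadm recipe; the CA path comes from the certificateDir reported earlier in this init, /var/lib/minikube/certs:)

    # Recompute the discovery token CA cert hash from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'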
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
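(Editor's note: the 496-byte conflist scp'd above is minikube's bridge CNI config. The actual bytes are not shown in the log, so the following is only an illustrative bridge conflist of the same shape; the name, subnet, and plugin fields are assumptions, not the file's real contents:)

    # Write an illustrative bridge CNI config (a sketch, not the real file).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF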
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
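(Editor's note: the -16 above is the apiserver's OOM score adjustment read straight out of /proc; a negative value makes the kernel's OOM killer less likely to pick the apiserver. The check is the one-liner from the Run line above:)

    # Read the kube-apiserver's OOM score adjustment (expect a negative value).
    cat /proc/$(pgrep kube-apiserver)/oom_adj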
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
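(Editor's note: the repeated 'get sa default' calls above are a readiness poll: after creating the minikube-rbac clusterrolebinding, minikube retries roughly twice a second until the controller-manager has created the default service account, which is what elevateKubeSystemPrivileges waits for. A sketch of the same sequence, with the binary path and kubeconfig taken from this run:)

    KUBECTL=/var/lib/minikube/binaries/v1.30.3/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    # Grant cluster-admin to kube-system's default service account ...
    sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig="$KCFG"
    # ... then poll until the default service account exists.
    until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
      sleep 0.5
    done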
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
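(Editor's note: once the three addons report enabled, the metrics-server verification at addons.go:475 amounts to checking the objects the apply created. A quick manual equivalent; the storageclass name "standard" is minikube's default and an assumption here, not read from this log:)

    # Confirm the addon objects created by the applies above:
    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass standard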
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
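(Editor's note: the three retries above poll the kube-system pod list until kube-dns and kube-proxy leave Pending; kubectl's wait verb expresses the same condition directly. A sketch using the label selectors from the wait list earlier in this run:)

    # Block until the DNS and proxy pods are Ready.
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=120s
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-proxy --timeout=120s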
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
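	The troubleshooting commands kubeadm prints above can be run as-is on the node once you have a shell there. A sketch using exactly the commands and socket path from the output (CONTAINERID is kubeadm's placeholder, not a real ID):

	    # kubelet status and recent logs
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    # list Kubernetes containers under cri-o, excluding pause sandboxes
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # once a failing container is identified, dump its logs
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID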
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
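	The sequence above is minikube's stale-config cleanup: each kubeconfig is grepped for the expected control-plane endpoint and deleted when the check fails (here every grep exits 2 because the files were already removed by 'kubeadm reset'). The same pattern as one shell loop, a sketch assuming the endpoint and paths from the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	        || sudo rm -f /etc/kubernetes/$f
	    done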
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
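	After the second init failure, minikube probes cri-o for every control-plane and addon component by name; each probe above returns an empty ID list, confirming no Kubernetes container was ever created. The same sweep as one loop, a sketch using the component names from the log:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $name =="
	      sudo crictl ps -a --quiet --name=$name   # empty output means no container exists
	    done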
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
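	The diagnostics minikube gathers above can be reproduced on the node with the same commands; note that 'describe nodes' necessarily fails here because no apiserver is listening on localhost:8443. A sketch, assuming the binary and kubeconfig paths from the log:

	    sudo journalctl -u kubelet -n 400    # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors
	    sudo journalctl -u crio -n 400       # CRI-O logs
	    # fails with "connection refused" while the control plane is down
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig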
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 
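	The suggested remediation is to restart the profile with the kubelet cgroup driver pinned to systemd, using the exact flag from the hint above. A sketch only; <profile> is a placeholder since the profile name is not shown in this part of the log:

	    # <profile> is hypothetical; substitute the name of the failing profile
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd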
	
	
	==> CRI-O <==
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.241621266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610686241595904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=586ae559-891f-46ef-abb4-e32864f2527a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.242216520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b69d99bd-5a93-432d-b02c-cd2c11b428b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.242279986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b69d99bd-5a93-432d-b02c-cd2c11b428b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.242468662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25,PodSandboxId:e735873e2db9aadf917b033cb16d5d4bf65b383f8345aee7343df38e2c0d7983,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788829841455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-phh59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f48ef56-5d78-4a1b-b53b-b99a03114323,},Annotations:map[string]string{io.kubernetes.container.hash: 128b519c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8,PodSandboxId:0bfd753b52d3820f0917b2b351f850b9538fbb04cb783c4b9f3a2702375ad623,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788774554385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gv5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db8dadd-0345-4eef-a024-bdaf97146e30,},Annotations:map[string]string{io.kubernetes.container.hash: da029baa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8,PodSandboxId:5e4532dd14faac1d844244fc146516c8fd9c48f9404c64d739d94a2cf6a0a99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609788605453703,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b4fe1-79af-497d-8119-7ad60547fefe,},Annotations:map[string]string{io.kubernetes.container.hash: 9443e13c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0,PodSandboxId:a38669a4c258ccf5eb4b22ed68a9cb59f22a7e825fe86ab756d9b91e12a5f6cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609788476838206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f938f331-504a-40f0-8b44-4b23cd07a93e,},Annotations:map[string]string{io.kubernetes.container.hash: 315d3f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79,PodSandboxId:23b1ce2239dba9552d864647bf5bf029908ed7bc419d4733edd7f20c3f28afc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609768751498094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 406178438c6ef73e2da4b188e37d6794,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b,PodSandboxId:e44bbad7456e9a0c70662b96e7b87afc623660702022fc81ed97718bdb6e4dab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609768750272556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb48ff9ac06aa69ffbd43f050240766,},Annotations:map[string]string{io.kubernetes.container.hash: 3dc05171,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3,PodSandboxId:d2efa81aad4207c21b93671c74f8edf528927fa59e7df6d981029fb9d6afe7ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609768684958347,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 485f3955bd335159a10fad46278afdb7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228,PodSandboxId:82bf5b759b253058214c5d64f46b6b2a250a839f0401e910a14adfc38f056838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609768667641352,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7faf4ae4ab0a5aa089d38a53b3f4f063,},Annotations:map[string]string{io.kubernetes.container.hash: bef12e1b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b69d99bd-5a93-432d-b02c-cd2c11b428b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.281558462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8e57ec4-e768-4212-8a17-75c25856ba96 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.281642064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8e57ec4-e768-4212-8a17-75c25856ba96 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.282806941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87b1244f-504e-477c-82e8-d41b6ebb01e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.283368632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610686283344932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87b1244f-504e-477c-82e8-d41b6ebb01e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.283921511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d49240b7-10e5-4c30-b0ac-2954f811fe3f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.283985698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d49240b7-10e5-4c30-b0ac-2954f811fe3f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.284213370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25,PodSandboxId:e735873e2db9aadf917b033cb16d5d4bf65b383f8345aee7343df38e2c0d7983,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788829841455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-phh59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f48ef56-5d78-4a1b-b53b-b99a03114323,},Annotations:map[string]string{io.kubernetes.container.hash: 128b519c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8,PodSandboxId:0bfd753b52d3820f0917b2b351f850b9538fbb04cb783c4b9f3a2702375ad623,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788774554385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gv5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db8dadd-0345-4eef-a024-bdaf97146e30,},Annotations:map[string]string{io.kubernetes.container.hash: da029baa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8,PodSandboxId:5e4532dd14faac1d844244fc146516c8fd9c48f9404c64d739d94a2cf6a0a99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721609788605453703,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b4fe1-79af-497d-8119-7ad60547fefe,},Annotations:map[string]string{io.kubernetes.container.hash: 9443e13c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0,PodSandboxId:a38669a4c258ccf5eb4b22ed68a9cb59f22a7e825fe86ab756d9b91e12a5f6cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721609788476838206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f938f331-504a-40f0-8b44-4b23cd07a93e,},Annotations:map[string]string{io.kubernetes.container.hash: 315d3f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79,PodSandboxId:23b1ce2239dba9552d864647bf5bf029908ed7bc419d4733edd7f20c3f28afc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609768751498094,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 406178438c6ef73e2da4b188e37d6794,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b,PodSandboxId:e44bbad7456e9a0c70662b96e7b87afc623660702022fc81ed97718bdb6e4dab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609768750272556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb48ff9ac06aa69ffbd43f050240766,},Annotations:map[string]string{io.kubernetes.container.hash: 3dc05171,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3,PodSandboxId:d2efa81aad4207c21b93671c74f8edf528927fa59e7df6d981029fb9d6afe7ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609768684958347,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 485f3955bd335159a10fad46278afdb7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228,PodSandboxId:82bf5b759b253058214c5d64f46b6b2a250a839f0401e910a14adfc38f056838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609768667641352,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7faf4ae4ab0a5aa089d38a53b3f4f063,},Annotations:map[string]string{io.kubernetes.container.hash: bef12e1b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d49240b7-10e5-4c30-b0ac-2954f811fe3f name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.325096799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c77c6f47-201c-4f8e-932b-7a0225fac18f name=/runtime.v1.RuntimeService/Version
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.325177335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c77c6f47-201c-4f8e-932b-7a0225fac18f name=/runtime.v1.RuntimeService/Version
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.326249604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91c2194c-a75d-4d88-9e93-fcbe0816df4d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.326642639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610686326621526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91c2194c-a75d-4d88-9e93-fcbe0816df4d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.327071531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b58e58c-2c91-468f-8072-f431339f5fc9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.327121709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b58e58c-2c91-468f-8072-f431339f5fc9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.327305806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25,PodSandboxId:e735873e2db9aadf917b033cb16d5d4bf65b383f8345aee7343df38e2c0d7983,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788829841455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-phh59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f48ef56-5d78-4a1b-b53b-b99a03114323,},Annotations:map[string]string{io.kubernetes.container.hash: 128b519c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8,PodSandboxId:0bfd753b52d3820f0917b2b351f850b9538fbb04cb783c4b9f3a2702375ad623,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788774554385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gv5m,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6db8dadd-0345-4eef-a024-bdaf97146e30,},Annotations:map[string]string{io.kubernetes.container.hash: da029baa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8,PodSandboxId:5e4532dd14faac1d844244fc146516c8fd9c48f9404c64d739d94a2cf6a0a99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721609788605453703,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b4fe1-79af-497d-8119-7ad60547fefe,},Annotations:map[string]string{io.kubernetes.container.hash: 9443e13c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0,PodSandboxId:a38669a4c258ccf5eb4b22ed68a9cb59f22a7e825fe86ab756d9b91e12a5f6cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721609788476838206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f938f331-504a-40f0-8b44-4b23cd07a93e,},Annotations:map[string]string{io.kubernetes.container.hash: 315d3f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79,PodSandboxId:23b1ce2239dba9552d864647bf5bf029908ed7bc419d4733edd7f20c3f28afc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609768751498094
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 406178438c6ef73e2da4b188e37d6794,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b,PodSandboxId:e44bbad7456e9a0c70662b96e7b87afc623660702022fc81ed97718bdb6e4dab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609768750272556,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb48ff9ac06aa69ffbd43f050240766,},Annotations:map[string]string{io.kubernetes.container.hash: 3dc05171,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3,PodSandboxId:d2efa81aad4207c21b93671c74f8edf528927fa59e7df6d981029fb9d6afe7ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609768684958347,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 485f3955bd335159a10fad46278afdb7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228,PodSandboxId:82bf5b759b253058214c5d64f46b6b2a250a839f0401e910a14adfc38f056838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609768667641352,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7faf4ae4ab0a5aa089d38a53b3f4f063,},Annotations:map[string]string{io.kubernetes.container.hash: bef12e1b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b58e58c-2c91-468f-8072-f431339f5fc9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.359819819Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=276864e0-c137-4524-bbde-21bbe225c18c name=/runtime.v1.RuntimeService/Version
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.359893367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=276864e0-c137-4524-bbde-21bbe225c18c name=/runtime.v1.RuntimeService/Version
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.360951660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a49c2efd-dc2c-4a6d-8c87-583f3dc03a35 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.361934722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610686361908395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a49c2efd-dc2c-4a6d-8c87-583f3dc03a35 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.362406503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d690974d-4f5e-4cfb-854f-00f1f74c8c19 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.362463448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d690974d-4f5e-4cfb-854f-00f1f74c8c19 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:11:26 default-k8s-diff-port-214905 crio[719]: time="2024-07-22 01:11:26.362639516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25,PodSandboxId:e735873e2db9aadf917b033cb16d5d4bf65b383f8345aee7343df38e2c0d7983,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788829841455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-phh59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f48ef56-5d78-4a1b-b53b-b99a03114323,},Annotations:map[string]string{io.kubernetes.container.hash: 128b519c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8,PodSandboxId:0bfd753b52d3820f0917b2b351f850b9538fbb04cb783c4b9f3a2702375ad623,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721609788774554385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4gv5m,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6db8dadd-0345-4eef-a024-bdaf97146e30,},Annotations:map[string]string{io.kubernetes.container.hash: da029baa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8,PodSandboxId:5e4532dd14faac1d844244fc146516c8fd9c48f9404c64d739d94a2cf6a0a99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721609788605453703,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b4fe1-79af-497d-8119-7ad60547fefe,},Annotations:map[string]string{io.kubernetes.container.hash: 9443e13c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0,PodSandboxId:a38669a4c258ccf5eb4b22ed68a9cb59f22a7e825fe86ab756d9b91e12a5f6cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721609788476838206,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f938f331-504a-40f0-8b44-4b23cd07a93e,},Annotations:map[string]string{io.kubernetes.container.hash: 315d3f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79,PodSandboxId:23b1ce2239dba9552d864647bf5bf029908ed7bc419d4733edd7f20c3f28afc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721609768751498094
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 406178438c6ef73e2da4b188e37d6794,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b,PodSandboxId:e44bbad7456e9a0c70662b96e7b87afc623660702022fc81ed97718bdb6e4dab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721609768750272556,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb48ff9ac06aa69ffbd43f050240766,},Annotations:map[string]string{io.kubernetes.container.hash: 3dc05171,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3,PodSandboxId:d2efa81aad4207c21b93671c74f8edf528927fa59e7df6d981029fb9d6afe7ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721609768684958347,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 485f3955bd335159a10fad46278afdb7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228,PodSandboxId:82bf5b759b253058214c5d64f46b6b2a250a839f0401e910a14adfc38f056838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721609768667641352,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-214905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7faf4ae4ab0a5aa089d38a53b3f4f063,},Annotations:map[string]string{io.kubernetes.container.hash: bef12e1b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d690974d-4f5e-4cfb-854f-00f1f74c8c19 name=/runtime.v1.RuntimeService/ListContainers
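
	The crio debug entries above are the periodic CRI polling performed while this report was collected: RuntimeService/Version, ImageService/ImageFsInfo, and an unfiltered RuntimeService/ListContainers, repeated in a tight loop. The same three RPCs can be issued by hand from the node — a minimal sketch, assuming crictl is available in the VM (e.g. via `minikube ssh -p default-k8s-diff-port-214905`) and CRI-O is on its default socket:

	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version       # RuntimeService/Version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers, no filter
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo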
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1e5f8f01efbd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   e735873e2db9a       coredns-7db6d8ff4d-phh59
	d6f0c65dbc052       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   0bfd753b52d38       coredns-7db6d8ff4d-4gv5m
	e30b46dc67de8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   5e4532dd14faa       storage-provisioner
	5e711aaab81f4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   14 minutes ago      Running             kube-proxy                0                   a38669a4c258c       kube-proxy-th55d
	f1bbb980156be       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   15 minutes ago      Running             kube-scheduler            2                   23b1ce2239dba       kube-scheduler-default-k8s-diff-port-214905
	8932ca8211a6f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   e44bbad7456e9       etcd-default-k8s-diff-port-214905
	7a7d6a0fb3fa2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   15 minutes ago      Running             kube-controller-manager   2                   d2efa81aad420       kube-controller-manager-default-k8s-diff-port-214905
	c698a466ba3cb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   15 minutes ago      Running             kube-apiserver            2                   82bf5b759b253       kube-apiserver-default-k8s-diff-port-214905
	
	
	==> coredns [a1e5f8f01efbd36b6a27ed757573b4141e99a40b47e679a4231817c8181a3f25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d6f0c65dbc052297588a34976fa5e278f92dbd1609432c9ec4e456c234f331e8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
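
	Both coredns replicas loaded the identical configuration (same SHA512) and logged nothing past startup, so DNS itself is not implicated here. To pull these logs or the Corefile behind that SHA — a sketch, assuming the kubectl context carries the profile name as elsewhere in this run:

	  kubectl --context default-k8s-diff-port-214905 -n kube-system logs coredns-7db6d8ff4d-4gv5m
	  kubectl --context default-k8s-diff-port-214905 -n kube-system get configmap coredns -o yaml   # the Corefile behind the reload SHA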
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-214905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-214905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=default-k8s-diff-port-214905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:56:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-214905
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 01:11:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 01:06:46 +0000   Mon, 22 Jul 2024 00:56:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 01:06:46 +0000   Mon, 22 Jul 2024 00:56:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 01:06:46 +0000   Mon, 22 Jul 2024 00:56:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 01:06:46 +0000   Mon, 22 Jul 2024 00:56:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.97
	  Hostname:    default-k8s-diff-port-214905
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb045cc0de4f4a91b8f64fe03eb3641b
	  System UUID:                fb045cc0-de4f-4a91-b8f6-4fe03eb3641b
	  Boot ID:                    07d950fa-0a86-4eb0-81fa-058c796af7b9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4gv5m                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-phh59                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-214905                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-214905             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-214905    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-th55d                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-214905             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-d4z4t                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node default-k8s-diff-port-214905 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-214905 event: Registered Node default-k8s-diff-port-214905 in Controller
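
	The Allocated resources block above is the column-wise sum of the Non-terminated Pods table, and the sums check out. Worked through as a sanity check (the displayed percentages are truncated):

	  cpu requests:    100+100+100+250+200+0+100+100+0 = 950m;   950m / 2000m allocatable         = 47.5% -> "47%"
	  memory requests: 70+70+100+0+0+0+0+200+0         = 440Mi;  450560Ki / 2164184Ki allocatable = 20.8% -> "20%"
	  memory limits:   170+170                         = 340Mi;  348160Ki / 2164184Ki allocatable = 16.1% -> "16%"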
	
	
	==> dmesg <==
	[  +0.039030] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.681094] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.776659] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.322742] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul22 00:51] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.065697] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056100] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.169027] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.134598] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.264476] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +4.259126] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +1.899721] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[  +0.059000] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.542682] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.552039] kauditd_printk_skb: 79 callbacks suppressed
	[ +24.368066] kauditd_printk_skb: 2 callbacks suppressed
	[Jul22 00:56] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.572760] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +4.911037] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.630436] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[ +13.364279] systemd-fstab-generator[4132]: Ignoring "noauto" option for root device
	[  +0.071880] kauditd_printk_skb: 14 callbacks suppressed
	[Jul22 00:57] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [8932ca8211a6f88c49f0d0b05f29e8e463d1428203e0c0eb686d183579c06f0b] <==
	{"level":"info","ts":"2024-07-22T00:56:09.179456Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.97:2380"}
	{"level":"info","ts":"2024-07-22T00:56:09.364095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T00:56:09.364216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T00:56:09.364262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d received MsgPreVoteResp from 94e51bf1f139c13d at term 1"}
	{"level":"info","ts":"2024-07-22T00:56:09.364293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T00:56:09.364316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d received MsgVoteResp from 94e51bf1f139c13d at term 2"}
	{"level":"info","ts":"2024-07-22T00:56:09.364348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e51bf1f139c13d became leader at term 2"}
	{"level":"info","ts":"2024-07-22T00:56:09.364374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 94e51bf1f139c13d elected leader 94e51bf1f139c13d at term 2"}
	{"level":"info","ts":"2024-07-22T00:56:09.368847Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:56:09.370069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"94e51bf1f139c13d","local-member-attributes":"{Name:default-k8s-diff-port-214905 ClientURLs:[https://192.168.61.97:2379]}","request-path":"/0/members/94e51bf1f139c13d/attributes","cluster-id":"864778fba5227de3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T00:56:09.37019Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:56:09.370492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T00:56:09.373102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"864778fba5227de3","local-member-id":"94e51bf1f139c13d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:56:09.373234Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:56:09.373289Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T00:56:09.374919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T00:56:09.376576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.97:2379"}
	{"level":"info","ts":"2024-07-22T00:56:09.37782Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T00:56:09.379052Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T01:06:09.665326Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-07-22T01:06:09.674561Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":723,"took":"8.599685ms","hash":2853876890,"current-db-size-bytes":2412544,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2412544,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-22T01:06:09.674661Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2853876890,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-07-22T01:11:09.675261Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-07-22T01:11:09.67913Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":966,"took":"3.342814ms","hash":3091229657,"current-db-size-bytes":2412544,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-22T01:11:09.679214Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3091229657,"revision":966,"compact-revision":723}
	
	
	==> kernel <==
	 01:11:26 up 20 min,  0 users,  load average: 0.16, 0.14, 0.10
	Linux default-k8s-diff-port-214905 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c698a466ba3cbcaf3aa0cc9e849e15316d1458f0bf029f29cf8a62047f3a9228] <==
	I0722 01:06:12.297176       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:07:12.296365       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:07:12.296679       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:07:12.296736       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:07:12.297515       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:07:12.297685       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:07:12.297742       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:09:12.297144       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:09:12.297229       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:09:12.297241       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:09:12.298338       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:09:12.298467       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:09:12.298495       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0722 01:11:11.302122       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:11:11.302883       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0722 01:11:12.303821       1 handler_proxy.go:93] no RequestInfo found in the context
	W0722 01:11:12.303861       1 handler_proxy.go:93] no RequestInfo found in the context
	E0722 01:11:12.304126       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0722 01:11:12.304168       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0722 01:11:12.303961       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0722 01:11:12.306160       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
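
	Every error in this block is the same failure: the aggregation layer cannot reach metrics-server, so the v1beta1.metrics.k8s.io APIService keeps answering 503 and the OpenAPI controller keeps requeueing it. First diagnostic steps, as a sketch:

	  kubectl get apiservice v1beta1.metrics.k8s.io -o yaml          # status.conditions states why it is unavailable
	  kubectl -n kube-system get endpoints metrics-server            # a 503 from the aggregator usually means no ready endpoints
	  kubectl -n kube-system describe pod -l k8s-app=metrics-server  # assumes the upstream k8s-app label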
	
	
	==> kube-controller-manager [7a7d6a0fb3fa247036818ffe164ad644284522d969c44f47d4c71fe99524d6f3] <==
	I0722 01:05:27.669290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:05:57.090082       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:05:57.677131       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:06:27.095712       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:06:27.685831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:06:57.101229       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:06:57.693900       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 01:07:24.039616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="294.118µs"
	E0722 01:07:27.106557       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:07:27.702436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0722 01:07:37.036905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="154.846µs"
	E0722 01:07:57.112082       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:07:57.709182       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:08:27.118149       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:08:27.717715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:08:57.124963       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:08:57.725066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:09:27.129921       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:09:27.733235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:09:57.136070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:09:57.741057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:10:27.145957       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:10:27.748535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0722 01:10:57.151202       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0722 01:10:57.758282       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
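
	The alternating resource-quota and garbage-collector errors are a downstream symptom of the same unavailable metrics APIService, not an independent fault: both controllers re-run API discovery on a 30 s cadence (visible in the timestamps) and trip over the dead metrics.k8s.io/v1beta1 group each time. The group can be probed directly to see the same 503 body the controllers see:

	  kubectl get --raw /apis/metrics.k8s.io/v1beta1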
	
	
	==> kube-proxy [5e711aaab81f48fd9ffec40d82571db5152ff6f5e369878976fa1e57e91f58d0] <==
	I0722 00:56:29.030644       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:56:29.081602       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.97"]
	I0722 00:56:29.247727       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:56:29.247772       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:56:29.247789       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:56:29.250099       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:56:29.250336       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:56:29.250545       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:56:29.253550       1 config.go:192] "Starting service config controller"
	I0722 00:56:29.253593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:56:29.253628       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:56:29.253644       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:56:29.256450       1 config.go:319] "Starting node config controller"
	I0722 00:56:29.258439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:56:29.354110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:56:29.354174       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:56:29.358845       1 shared_informer.go:320] Caches are synced for node config
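
	kube-proxy started cleanly in IPv4 single-stack iptables mode and synced all three config caches within about 100 ms. If service routing needed verification, the state it programs can be inspected on the node (sketch, from `minikube ssh -p default-k8s-diff-port-214905`):

	  sysctl net.ipv4.conf.all.route_localnet    # expect 1, per the proxier log line above
	  sudo iptables -t nat -L KUBE-SERVICES -n   # the service chains kube-proxy maintains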
	
	
	==> kube-scheduler [f1bbb980156be2f258c79fc75ca597e177224fe0369e3e4c586f04c348f21d79] <==
	W0722 00:56:12.109737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:56:12.109814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:56:12.178202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 00:56:12.178244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 00:56:12.211869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:56:12.211965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:56:12.216302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:56:12.216338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:56:12.316439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:56:12.317760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 00:56:12.350068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:56:12.350160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 00:56:12.477256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 00:56:12.477439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 00:56:12.503262       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:56:12.503389       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:56:12.504440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:56:12.504526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:56:12.512048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 00:56:12.512151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 00:56:12.565377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 00:56:12.565483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 00:56:12.573320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:56:12.573389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0722 00:56:14.294939       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
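Triage note: the forbidden list/watch errors above are the usual kube-scheduler startup race: its informers come up before the apiserver has finished publishing the system:kube-scheduler RBAC bindings, and the final "Caches are synced" line at 00:56:14 shows they cleared on their own. If they ever persisted, a hedged way to check the grants with stock kubectl would be:

	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl get clusterrolebinding system:kube-scheduler -o yaml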
	
	
	==> kubelet <==
	Jul 22 01:09:14 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:09:14.051925    3928 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:09:14 default-k8s-diff-port-214905 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:09:14 default-k8s-diff-port-214905 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:09:14 default-k8s-diff-port-214905 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:09:14 default-k8s-diff-port-214905 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:09:18 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:09:18.023522    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:09:33 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:09:33.022710    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:09:45 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:09:45.023168    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:09:56 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:09:56.022820    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:10:10 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:10:10.022421    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:10:14 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:10:14.054182    3928 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:10:14 default-k8s-diff-port-214905 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:10:14 default-k8s-diff-port-214905 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:10:14 default-k8s-diff-port-214905 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:10:14 default-k8s-diff-port-214905 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:10:21 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:10:21.022964    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:10:36 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:10:36.024171    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:10:50 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:10:50.023123    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:11:01 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:11:01.023061    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
	Jul 22 01:11:14 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:11:14.051271    3928 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:11:14 default-k8s-diff-port-214905 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:11:14 default-k8s-diff-port-214905 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:11:14 default-k8s-diff-port-214905 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:11:14 default-k8s-diff-port-214905 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:11:16 default-k8s-diff-port-214905 kubelet[3928]: E0722 01:11:16.025097    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d4z4t" podUID="f1a411a0-2d46-4c04-9922-eb4046852082"
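Triage note: the ImagePullBackOff entries are expected in this suite: metrics-server is deliberately pointed at the unresolvable image fake.domain/registry.k8s.io/echoserver:1.4, so the kubelet can never pull it. The iptables-canary failures are separate and benign: the guest kernel simply has no ip6tables nat table loaded. A sketch for confirming both (assuming the metrics-server pods carry the usual k8s-app=metrics-server label, and that the relevant kernel module is ip6table_nat):

	kubectl -n kube-system describe pod -l k8s-app=metrics-server
	minikube -p default-k8s-diff-port-214905 ssh -- lsmod | grep -i ip6table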
	
	
	==> storage-provisioner [e30b46dc67de82ceb6948ad71629f98e316bd804e132c1522c082fc395ee5ab8] <==
	I0722 00:56:28.957852       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 00:56:28.993107       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 00:56:28.993232       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 00:56:29.018379       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 00:56:29.020860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-214905_80549e04-a5ce-4460-8313-f0e1c2be1525!
	I0722 00:56:29.023316       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"decebcb1-6e67-4b4d-925a-5b81248c4e93", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-214905_80549e04-a5ce-4460-8313-f0e1c2be1525 became leader
	I0722 00:56:29.127733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-214905_80549e04-a5ce-4460-8313-f0e1c2be1525!
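Triage note: the storage provisioner started cleanly and won the Endpoints-based leader lease named in its own log. If leadership ever looked stuck, the lease object could be inspected directly (namespace and name taken verbatim from the event above):

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml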
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-214905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-d4z4t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-214905 describe pod metrics-server-569cc877fc-d4z4t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-214905 describe pod metrics-server-569cc877fc-d4z4t: exit status 1 (62.086914ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-d4z4t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-214905 describe pod metrics-server-569cc877fc-d4z4t: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (354.51s)
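Triage note: the post-mortem raced the cluster here: the pod list at helpers_test.go:272 still saw metrics-server-569cc877fc-d4z4t, but the pod was gone by the time describe ran, hence the NotFound. Re-running the same field-selector query is enough to confirm nothing non-running remains:

	kubectl --context default-k8s-diff-port-214905 get pods -A --field-selector=status.phase!=Running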

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (153.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[last message repeated 9 more times]
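Triage note: every WARNING in this block is one symptom: the apiserver at 192.168.39.174:8443 refuses connections while the old-k8s-version profile restarts, so the dashboard pod list cannot be served at all. The interleaved cert_rotation errors appear to come from client-cert watchers for profiles deleted earlier in the run (their client.crt files no longer exist) and are noise here. A minimal liveness probe against the endpoint (plain curl; -k because the test CA is not in the host trust store) would be:

	curl -k https://192.168.39.174:8443/healthz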
E0722 01:07:54.281930   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[last message repeated 3 more times]
E0722 01:07:58.221012   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[last message repeated 2 more times]
E0722 01:08:01.305979   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[last message repeated 50 more times]
E0722 01:08:52.193373   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[last message repeated 39 more times]
E0722 01:09:31.986579   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[last message repeated 13 more times]
E0722 01:09:46.543812   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[the WARNING line above was emitted 8 times in total; duplicates omitted]
E0722 01:09:54.889137   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
E0722 01:09:55.172516   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.174:8443: connect: connection refused
[the WARNING line above was emitted 19 times in total; duplicates omitted]
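The warnings above come from the test helper repeatedly listing the dashboard pods by label selector while the apiserver at 192.168.39.174:8443 refuses connections. As a rough, hypothetical illustration only (assumed client-go usage, not minikube's actual helper code), the equivalent list call looks roughly like this:

// Hypothetical sketch using client-go (assumed dependency, not the
// helper's actual code): list dashboard pods by the same label selector
// that appears in the warnings above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig path here is a placeholder for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// With the apiserver down, this surfaces the same
		// "dial tcp ... connect: connection refused" error seen above.
		fmt.Println("pod list failed:", err)
		return
	}
	fmt.Printf("found %d dashboard pods\n", len(pods.Items))
}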
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 2 (229.778575ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-366657" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-366657 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-366657 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.297µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-366657 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
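The 1.297µs failure is characteristic of a context whose deadline has already passed: the call fails immediately, before any work is attempted. A minimal sketch of that behavior in Go:

// Sketch: an expired context fails callers instantly with
// context.DeadlineExceeded, matching the ~1µs error above.
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Microsecond)
	defer cancel()
	time.Sleep(time.Millisecond) // let the deadline lapse
	if err := ctx.Err(); err != nil {
		fmt.Println(err) // "context deadline exceeded"
	}
}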
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 2 (225.028437ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-366657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-366657 logs -n 25: (1.630169091s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-590595             | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-590595                  | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-590595 --memory=2200 --alsologtostderr   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-945581             | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC | 22 Jul 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-945581                                   | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-590595 image list                           | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p newest-cni-590595                                   | newest-cni-590595            | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-934399 | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:43 UTC |
	|         | disable-driver-mounts-934399                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:43 UTC | 22 Jul 24 00:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-360389            | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC | 22 Jul 24 00:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-214905       | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-366657        | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-214905 | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:56 UTC |
	|         | default-k8s-diff-port-214905                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-945581                  | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-945581 --memory=2200                     | no-preload-945581            | jenkins | v1.33.1 | 22 Jul 24 00:45 UTC | 22 Jul 24 00:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-366657             | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC | 22 Jul 24 00:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-366657                              | old-k8s-version-366657       | jenkins | v1.33.1 | 22 Jul 24 00:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-360389                 | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-360389                                  | embed-certs-360389           | jenkins | v1.33.1 | 22 Jul 24 00:47 UTC | 22 Jul 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:47:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:47:11.399269   72069 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:47:11.399363   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399371   72069 out.go:304] Setting ErrFile to fd 2...
	I0722 00:47:11.399375   72069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:47:11.399555   72069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:47:11.400061   72069 out.go:298] Setting JSON to false
	I0722 00:47:11.400923   72069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5375,"bootTime":1721603856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:47:11.400979   72069 start.go:139] virtualization: kvm guest
	I0722 00:47:11.403149   72069 out.go:177] * [embed-certs-360389] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:47:11.404349   72069 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:47:11.404495   72069 notify.go:220] Checking for updates...
	I0722 00:47:11.406518   72069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:47:11.407497   72069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:47:11.408480   72069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:47:11.409558   72069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:47:11.410707   72069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:47:11.412181   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:47:11.412562   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.412616   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.427332   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0722 00:47:11.427714   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.428211   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.428237   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.428548   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.428722   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.428942   72069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:47:11.429213   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:47:11.429246   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:47:11.443886   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0722 00:47:11.444320   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:47:11.444722   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:47:11.444742   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:47:11.445151   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:47:11.445397   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:47:11.478487   72069 out.go:177] * Using the kvm2 driver based on existing profile
	I0722 00:47:11.479887   72069 start.go:297] selected driver: kvm2
	I0722 00:47:11.479907   72069 start.go:901] validating driver "kvm2" against &{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.480044   72069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:47:11.480938   72069 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.481002   72069 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0722 00:47:11.496636   72069 install.go:137] /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0722 00:47:11.496999   72069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:47:11.497058   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:47:11.497073   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:47:11.497113   72069 start.go:340] cluster config:
	{Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:47:11.497206   72069 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:47:11.499096   72069 out.go:177] * Starting "embed-certs-360389" primary control-plane node in "embed-certs-360389" cluster
	I0722 00:47:07.486881   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:10.558852   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:47:11.500360   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:47:11.500398   72069 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0722 00:47:11.500405   72069 cache.go:56] Caching tarball of preloaded images
	I0722 00:47:11.500486   72069 preload.go:172] Found /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0722 00:47:11.500496   72069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0722 00:47:11.500576   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:47:11.500747   72069 start.go:360] acquireMachinesLock for embed-certs-360389: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:47:16.638908   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	[32 further "Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host" lines from pid 71227, 00:47:19 through 00:49:43, omitted]
	I0722 00:49:46.142902   71227 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.97:22: connect: no route to host
	I0722 00:49:49.147421   71396 start.go:364] duration metric: took 4m20.815253945s to acquireMachinesLock for "no-preload-945581"
	I0722 00:49:49.147470   71396 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:49:49.147476   71396 fix.go:54] fixHost starting: 
	I0722 00:49:49.147819   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:49:49.147851   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:49:49.163148   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0722 00:49:49.163569   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:49:49.164005   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:49:49.164029   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:49:49.164377   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:49:49.164602   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:49:49.164775   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:49:49.166353   71396 fix.go:112] recreateIfNeeded on no-preload-945581: state=Stopped err=<nil>
	I0722 00:49:49.166384   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	W0722 00:49:49.166541   71396 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:49:49.168381   71396 out.go:177] * Restarting existing kvm2 VM for "no-preload-945581" ...
	I0722 00:49:49.144751   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:49:49.144798   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145096   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:49:49.145120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:49:49.145534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:49:49.147295   71227 machine.go:97] duration metric: took 4m37.436148538s to provisionDockerMachine
	I0722 00:49:49.147331   71227 fix.go:56] duration metric: took 4m37.456082976s for fixHost
	I0722 00:49:49.147339   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 4m37.456102125s
	W0722 00:49:49.147360   71227 start.go:714] error starting host: provision: host is not running
	W0722 00:49:49.147451   71227 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0722 00:49:49.147458   71227 start.go:729] Will try again in 5 seconds ...
	I0722 00:49:49.169523   71396 main.go:141] libmachine: (no-preload-945581) Calling .Start
	I0722 00:49:49.169693   71396 main.go:141] libmachine: (no-preload-945581) Ensuring networks are active...
	I0722 00:49:49.170456   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network default is active
	I0722 00:49:49.170784   71396 main.go:141] libmachine: (no-preload-945581) Ensuring network mk-no-preload-945581 is active
	I0722 00:49:49.171142   71396 main.go:141] libmachine: (no-preload-945581) Getting domain xml...
	I0722 00:49:49.171883   71396 main.go:141] libmachine: (no-preload-945581) Creating domain...
	I0722 00:49:50.368371   71396 main.go:141] libmachine: (no-preload-945581) Waiting to get IP...
	I0722 00:49:50.369405   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.369759   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.369834   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.369752   72639 retry.go:31] will retry after 218.067591ms: waiting for machine to come up
	I0722 00:49:50.589162   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.589629   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.589652   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.589586   72639 retry.go:31] will retry after 289.602775ms: waiting for machine to come up
	I0722 00:49:50.881135   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:50.881628   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:50.881656   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:50.881577   72639 retry.go:31] will retry after 404.102935ms: waiting for machine to come up
	I0722 00:49:51.287195   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.287613   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.287637   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.287564   72639 retry.go:31] will retry after 441.032452ms: waiting for machine to come up
	I0722 00:49:51.730393   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:51.730822   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:51.730849   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:51.730778   72639 retry.go:31] will retry after 501.742802ms: waiting for machine to come up
	I0722 00:49:52.234826   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.235242   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.235270   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.235204   72639 retry.go:31] will retry after 637.226427ms: waiting for machine to come up
	I0722 00:49:52.874034   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:52.874408   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:52.874435   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:52.874354   72639 retry.go:31] will retry after 934.415512ms: waiting for machine to come up
	I0722 00:49:54.149867   71227 start.go:360] acquireMachinesLock for default-k8s-diff-port-214905: {Name:mk6b3c50c1c221dd600e48c8652a2f77916f7114 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:49:53.810377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:53.810773   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:53.810802   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:53.810713   72639 retry.go:31] will retry after 1.086281994s: waiting for machine to come up
	I0722 00:49:54.898235   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:54.898636   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:54.898666   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:54.898620   72639 retry.go:31] will retry after 1.427705948s: waiting for machine to come up
	I0722 00:49:56.328275   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:56.328720   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:56.328753   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:56.328664   72639 retry.go:31] will retry after 1.74282346s: waiting for machine to come up
	I0722 00:49:58.073601   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:49:58.073983   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:49:58.074002   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:49:58.073937   72639 retry.go:31] will retry after 2.51361725s: waiting for machine to come up
	I0722 00:50:00.589396   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:00.589834   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:00.589868   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:00.589798   72639 retry.go:31] will retry after 2.503161132s: waiting for machine to come up
	I0722 00:50:03.094171   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:03.094475   71396 main.go:141] libmachine: (no-preload-945581) DBG | unable to find current IP address of domain no-preload-945581 in network mk-no-preload-945581
	I0722 00:50:03.094500   71396 main.go:141] libmachine: (no-preload-945581) DBG | I0722 00:50:03.094441   72639 retry.go:31] will retry after 2.749996284s: waiting for machine to come up
	I0722 00:50:07.107185   71766 start.go:364] duration metric: took 3m43.825226488s to acquireMachinesLock for "old-k8s-version-366657"
	I0722 00:50:07.107247   71766 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:07.107256   71766 fix.go:54] fixHost starting: 
	I0722 00:50:07.107639   71766 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:07.107677   71766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:07.125437   71766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0722 00:50:07.125898   71766 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:07.126410   71766 main.go:141] libmachine: Using API Version  1
	I0722 00:50:07.126432   71766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:07.126809   71766 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:07.127008   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:07.127157   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetState
	I0722 00:50:07.128854   71766 fix.go:112] recreateIfNeeded on old-k8s-version-366657: state=Stopped err=<nil>
	I0722 00:50:07.128894   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	W0722 00:50:07.129063   71766 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:07.131118   71766 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-366657" ...
	I0722 00:50:07.132293   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .Start
	I0722 00:50:07.132446   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring networks are active...
	I0722 00:50:07.133199   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network default is active
	I0722 00:50:07.133630   71766 main.go:141] libmachine: (old-k8s-version-366657) Ensuring network mk-old-k8s-version-366657 is active
	I0722 00:50:07.133979   71766 main.go:141] libmachine: (old-k8s-version-366657) Getting domain xml...
	I0722 00:50:07.134723   71766 main.go:141] libmachine: (old-k8s-version-366657) Creating domain...
	I0722 00:50:05.845660   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846044   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has current primary IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.846070   71396 main.go:141] libmachine: (no-preload-945581) Found IP for machine: 192.168.50.251
	I0722 00:50:05.846084   71396 main.go:141] libmachine: (no-preload-945581) Reserving static IP address...
	I0722 00:50:05.846475   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.846498   71396 main.go:141] libmachine: (no-preload-945581) DBG | skip adding static IP to network mk-no-preload-945581 - found existing host DHCP lease matching {name: "no-preload-945581", mac: "52:54:00:2e:d4:7d", ip: "192.168.50.251"}
	I0722 00:50:05.846516   71396 main.go:141] libmachine: (no-preload-945581) Reserved static IP address: 192.168.50.251
	I0722 00:50:05.846526   71396 main.go:141] libmachine: (no-preload-945581) DBG | Getting to WaitForSSH function...
	I0722 00:50:05.846542   71396 main.go:141] libmachine: (no-preload-945581) Waiting for SSH to be available...
	I0722 00:50:05.848751   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849100   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.849131   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.849223   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH client type: external
	I0722 00:50:05.849243   71396 main.go:141] libmachine: (no-preload-945581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa (-rw-------)
	I0722 00:50:05.849284   71396 main.go:141] libmachine: (no-preload-945581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:05.849298   71396 main.go:141] libmachine: (no-preload-945581) DBG | About to run SSH command:
	I0722 00:50:05.849328   71396 main.go:141] libmachine: (no-preload-945581) DBG | exit 0
	I0722 00:50:05.979082   71396 main.go:141] libmachine: (no-preload-945581) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:05.979510   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetConfigRaw
	I0722 00:50:05.980099   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:05.982482   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.982851   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.982887   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.983258   71396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/config.json ...
	I0722 00:50:05.983453   71396 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:05.983472   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:05.983666   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:05.985822   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986287   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:05.986314   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:05.986429   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:05.986593   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986770   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:05.986932   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:05.987075   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:05.987279   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:05.987292   71396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:06.098636   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:06.098668   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.098908   71396 buildroot.go:166] provisioning hostname "no-preload-945581"
	I0722 00:50:06.098931   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.099126   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.101842   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102178   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.102203   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.102342   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.102582   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102782   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.102927   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.103073   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.103244   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.103259   71396 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-945581 && echo "no-preload-945581" | sudo tee /etc/hostname
	I0722 00:50:06.230309   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-945581
	
	I0722 00:50:06.230343   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.233015   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233340   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.233381   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.233537   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.233713   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233867   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.233977   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.234136   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.234309   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.234331   71396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-945581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-945581/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-945581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:06.356434   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
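
The shell block above is the literal script the provisioner runs to pin the new hostname to 127.0.1.1 in /etc/hosts. As a sketch, assuming nothing beyond what the log shows, this is how such a command string can be templated in Go before being sent over SSH:

package main

import "fmt"

// Sketch: template the hostname-pinning script seen in the log for an
// arbitrary machine name. The real provisioner builds an equivalent string.
func main() {
	hostname := "no-preload-945581"
	cmd := fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
	fmt.Println(cmd)
}
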
	I0722 00:50:06.356463   71396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:06.356485   71396 buildroot.go:174] setting up certificates
	I0722 00:50:06.356494   71396 provision.go:84] configureAuth start
	I0722 00:50:06.356503   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetMachineName
	I0722 00:50:06.356757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:06.359304   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359681   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.359705   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.359830   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.362024   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362342   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.362369   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.362493   71396 provision.go:143] copyHostCerts
	I0722 00:50:06.362548   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:06.362560   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:06.362644   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:06.362747   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:06.362755   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:06.362781   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:06.362837   71396 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:06.362846   71396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:06.362875   71396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:06.362919   71396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.no-preload-945581 san=[127.0.0.1 192.168.50.251 localhost minikube no-preload-945581]
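
The `provision.go:117` line above issues a server certificate whose SANs cover 127.0.0.1, the VM IP, and the machine names. A minimal sketch of producing such a cert with Go's crypto/x509 — self-signed here for brevity, whereas the log shows minikube signing with its ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs taken from the log's san=[...] list.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-945581"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.251")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-945581"},
	}
	// Self-signed: template doubles as parent. minikube passes its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
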
	I0722 00:50:06.430154   71396 provision.go:177] copyRemoteCerts
	I0722 00:50:06.430208   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:06.430232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.432910   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433234   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.433262   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.433421   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.433610   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.433757   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.433892   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.521709   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:06.545504   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 00:50:06.567911   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:06.591057   71396 provision.go:87] duration metric: took 234.553134ms to configureAuth
	I0722 00:50:06.591082   71396 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:06.591261   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:50:06.591338   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.593970   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594295   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.594323   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.594484   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.594690   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.594856   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.595003   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.595211   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.595378   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.595395   71396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:06.863536   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:06.863564   71396 machine.go:97] duration metric: took 880.097281ms to provisionDockerMachine
	I0722 00:50:06.863579   71396 start.go:293] postStartSetup for "no-preload-945581" (driver="kvm2")
	I0722 00:50:06.863595   71396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:06.863621   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:06.863943   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:06.863968   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.866696   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867085   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.867121   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.867280   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.867474   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.867693   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.867855   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:06.953728   71396 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:06.958026   71396 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:06.958060   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:06.958160   71396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:06.958245   71396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:06.958381   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:06.967446   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:06.988827   71396 start.go:296] duration metric: took 125.232772ms for postStartSetup
	I0722 00:50:06.988870   71396 fix.go:56] duration metric: took 17.841392885s for fixHost
	I0722 00:50:06.988892   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:06.992032   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992480   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:06.992514   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:06.992710   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:06.992912   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993054   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:06.993182   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:06.993341   71396 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:06.993521   71396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0722 00:50:06.993534   71396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:07.107008   71396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609407.082052746
	
	I0722 00:50:07.107039   71396 fix.go:216] guest clock: 1721609407.082052746
	I0722 00:50:07.107046   71396 fix.go:229] Guest: 2024-07-22 00:50:07.082052746 +0000 UTC Remote: 2024-07-22 00:50:06.988874638 +0000 UTC m=+278.790790533 (delta=93.178108ms)
	I0722 00:50:07.107078   71396 fix.go:200] guest clock delta is within tolerance: 93.178108ms
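
The clock check above runs `date +%s.%N` in the guest, parses the result, and accepts the skew if it is within tolerance (93.178108ms here). A sketch that reproduces the logged delta from the logged values; the 2s tolerance is an assumed threshold, not necessarily the one minikube uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output into a time.Time.
// Assumes %N prints the full 9-digit nanosecond field, as GNU date does.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1721609407.082052746") // value from the log
	host := time.Unix(1721609406, 988874638)            // "Remote" time from the log
	delta := guest.Sub(host)                            // prints 93.178108ms
	const tolerance = 2 * time.Second                   // assumed threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}
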
	I0722 00:50:07.107090   71396 start.go:83] releasing machines lock for "no-preload-945581", held for 17.959634307s
	I0722 00:50:07.107122   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.107382   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:07.110150   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110556   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.110585   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.110772   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111357   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111554   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:50:07.111630   71396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:07.111677   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.111941   71396 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:07.111964   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:50:07.114386   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114771   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.114818   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114841   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.114896   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115124   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115309   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.115362   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:07.115387   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:07.115477   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.115586   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:50:07.115729   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:50:07.115921   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:50:07.116058   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:50:07.225608   71396 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:07.231399   71396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:07.377396   71396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:07.383388   71396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:07.383467   71396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:07.405663   71396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:07.405690   71396 start.go:495] detecting cgroup driver to use...
	I0722 00:50:07.405793   71396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:07.422118   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:07.437199   71396 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:07.437255   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:07.452248   71396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:07.466256   71396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:07.588726   71396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:07.729394   71396 docker.go:233] disabling docker service ...
	I0722 00:50:07.729456   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:07.743384   71396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:07.756095   71396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:07.906645   71396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:08.041579   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:08.054863   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:08.073114   71396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0722 00:50:08.073172   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.084226   71396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:08.084301   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.094581   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.105603   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.115685   71396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:08.126499   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.137018   71396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:08.154480   71396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
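
The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctls. Here is the pause_image edit expressed in Go, as a sketch of what that first sed line does (minikube itself shells out to sed rather than editing the file natively):

package main

import (
	"fmt"
	"regexp"
)

// Replace any existing pause_image line with the desired value, the same
// transformation as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	fmt.Print(out)
}
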
	I0722 00:50:08.164668   71396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:08.174305   71396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:08.174359   71396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:08.186456   71396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:08.194821   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:08.320687   71396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:08.465373   71396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:08.465448   71396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:08.470485   71396 start.go:563] Will wait 60s for crictl version
	I0722 00:50:08.470544   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:08.474072   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:08.513114   71396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:08.513216   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.539930   71396 ssh_runner.go:195] Run: crio --version
	I0722 00:50:08.567620   71396 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0722 00:50:08.382060   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting to get IP...
	I0722 00:50:08.383320   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.383745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.383811   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.383715   72776 retry.go:31] will retry after 263.644609ms: waiting for machine to come up
	I0722 00:50:08.649257   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.649809   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.649830   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.649778   72776 retry.go:31] will retry after 324.085853ms: waiting for machine to come up
	I0722 00:50:08.975328   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:08.975773   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:08.975804   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:08.975732   72776 retry.go:31] will retry after 301.332395ms: waiting for machine to come up
	I0722 00:50:09.278150   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.278576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.278618   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.278522   72776 retry.go:31] will retry after 439.529948ms: waiting for machine to come up
	I0722 00:50:09.720181   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:09.720739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:09.720765   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:09.720698   72776 retry.go:31] will retry after 552.013475ms: waiting for machine to come up
	I0722 00:50:10.274671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:10.275089   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:10.275121   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:10.275025   72776 retry.go:31] will retry after 907.37255ms: waiting for machine to come up
	I0722 00:50:11.183963   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:11.184540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:11.184576   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:11.184478   72776 retry.go:31] will retry after 1.051281586s: waiting for machine to come up
	I0722 00:50:12.237292   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:12.237722   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:12.237766   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:12.237695   72776 retry.go:31] will retry after 1.060332947s: waiting for machine to come up
	I0722 00:50:08.568752   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetIP
	I0722 00:50:08.571616   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572030   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:50:08.572059   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:50:08.572256   71396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:08.576341   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:08.587890   71396 kubeadm.go:883] updating cluster {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:08.588024   71396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0722 00:50:08.588089   71396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:08.621425   71396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0722 00:50:08.621453   71396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:08.621515   71396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.621539   71396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.621554   71396 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.621559   71396 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.621620   71396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.621681   71396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.621676   71396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.621693   71396 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623311   71396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.623330   71396 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.623340   71396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:08.623453   71396 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 00:50:08.623460   71396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.623481   71396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.623458   71396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.623524   71396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.837478   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.839188   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:08.839207   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0722 00:50:08.860882   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:08.862992   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:08.865426   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:08.879674   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:08.909568   71396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0722 00:50:08.909644   71396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:08.909705   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110293   71396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0722 00:50:09.110339   71396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.110362   71396 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0722 00:50:09.110392   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110395   71396 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.110413   71396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0722 00:50:09.110435   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110439   71396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.110466   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110500   71396 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0722 00:50:09.110529   71396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0722 00:50:09.110531   71396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.110549   71396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.110571   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110586   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:09.110625   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 00:50:09.149087   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 00:50:09.149139   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149182   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0722 00:50:09.149223   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 00:50:09.149230   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.149292   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 00:50:09.149320   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0722 00:50:09.238698   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238764   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 00:50:09.238804   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:09.238823   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238871   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:09.238892   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0722 00:50:09.238903   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:09.238906   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.238949   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0722 00:50:09.257848   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0722 00:50:09.257949   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:09.257970   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.258044   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:09.463757   71396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.738839   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.499865107s)
	I0722 00:50:11.738859   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.499932773s)
	I0722 00:50:11.738871   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0722 00:50:11.738890   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0722 00:50:11.738896   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738902   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.500006368s)
	I0722 00:50:11.738926   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0722 00:50:11.738954   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0722 00:50:11.738981   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.500138592s)
	I0722 00:50:11.739009   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739074   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.481015482s)
	I0722 00:50:11.739091   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.481127759s)
	I0722 00:50:11.739096   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0722 00:50:11.739104   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0722 00:50:11.739125   71396 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.27534053s)
	I0722 00:50:11.739156   71396 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0722 00:50:11.739186   71396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:11.739228   71396 ssh_runner.go:195] Run: which crictl
	I0722 00:50:13.299855   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:13.300350   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:13.300381   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:13.300289   72776 retry.go:31] will retry after 1.626502795s: waiting for machine to come up
	I0722 00:50:14.929188   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:14.929552   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:14.929575   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:14.929503   72776 retry.go:31] will retry after 1.83887111s: waiting for machine to come up
	I0722 00:50:16.770361   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:16.770802   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:16.770821   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:16.770762   72776 retry.go:31] will retry after 2.152025401s: waiting for machine to come up
	I0722 00:50:13.289749   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.550767023s)
	I0722 00:50:13.289785   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0722 00:50:13.289810   71396 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.289808   71396 ssh_runner.go:235] Completed: which crictl: (1.550553252s)
	I0722 00:50:13.289869   71396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:13.289870   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0722 00:50:13.323493   71396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 00:50:13.323622   71396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:15.173140   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.883165124s)
	I0722 00:50:15.173176   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0722 00:50:15.173188   71396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.849542141s)
	I0722 00:50:15.173210   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173289   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0722 00:50:15.173215   71396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0722 00:50:16.526302   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.35298439s)
	I0722 00:50:16.526332   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0722 00:50:16.526367   71396 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:16.526439   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0722 00:50:18.925614   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:18.926062   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:18.926093   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:18.925961   72776 retry.go:31] will retry after 2.43886352s: waiting for machine to come up
	I0722 00:50:21.367523   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:21.368022   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | unable to find current IP address of domain old-k8s-version-366657 in network mk-old-k8s-version-366657
	I0722 00:50:21.368067   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | I0722 00:50:21.367966   72776 retry.go:31] will retry after 3.225328957s: waiting for machine to come up
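The interleaved 71766 lines above show libmachine polling libvirt for the VM's DHCP lease, sleeping a growing, jittered interval between attempts before giving up. A minimal Go sketch of that retry shape, assuming a lookup callback that stands in for the lease query (this is not minikube's actual retry.go API):

package vmwait

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// WaitForIP polls lookup until it yields an address or the deadline passes,
// printing the same "will retry after ..." breadcrumb seen in the log.
func WaitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	wait := time.Second
	for start := time.Now(); time.Since(start) < deadline; {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Grow the base interval and add jitter so concurrent waiters spread out.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

A caller would pass the DHCP-lease query as lookup along with a per-phase deadline, e.g. WaitForIP(queryLease, 4*time.Minute), where queryLease is a hypothetical helper.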
	I0722 00:50:18.492520   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966052506s)
	I0722 00:50:18.492558   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0722 00:50:18.492594   71396 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:18.492657   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0722 00:50:21.667629   71396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.174944821s)
	I0722 00:50:21.667663   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0722 00:50:21.667690   71396 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:21.667749   71396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0722 00:50:22.310830   71396 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 00:50:22.310879   71396 cache_images.go:123] Successfully loaded all cached images
	I0722 00:50:22.310885   71396 cache_images.go:92] duration metric: took 13.689420175s to LoadCachedImages
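To recap the image-cache phase that just completed: each tarball is stat'ed on the guest, copied over only when absent, then loaded with podman and recorded as transferred. A compressed sketch of that per-image decision, where run and copyToVM are hypothetical stand-ins for minikube's ssh_runner helpers:

package imagecache

import "fmt"

// LoadCached pushes one cached image tarball into the guest's runtime,
// skipping the transfer when the guest already has the file.
func LoadCached(run func(cmd string) error, copyToVM func(src, dst string) error, src, dst string) error {
	// `stat -c "%s %y"` succeeds only when the tarball is already present.
	if err := run(fmt.Sprintf(`stat -c "%%s %%y" %s`, dst)); err == nil {
		fmt.Printf("copy: skipping %s (exists)\n", dst)
	} else if err := copyToVM(src, dst); err != nil {
		return fmt.Errorf("transfer %s: %w", src, err)
	}
	// podman load makes the image visible to CRI-O on the guest.
	if err := run("sudo podman load -i " + dst); err != nil {
		return fmt.Errorf("podman load %s: %w", dst, err)
	}
	fmt.Printf("Transferred and loaded %s from cache\n", src)
	return nil
}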
	I0722 00:50:22.310897   71396 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.31.0-beta.0 crio true true} ...
	I0722 00:50:22.311039   71396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-945581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:22.311105   71396 ssh_runner.go:195] Run: crio config
	I0722 00:50:22.355530   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:22.355554   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:22.355574   71396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:22.355593   71396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-945581 NodeName:no-preload-945581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:22.355719   71396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-945581"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:22.355778   71396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 00:50:22.365510   71396 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:22.365569   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:22.374323   71396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0722 00:50:22.391093   71396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 00:50:22.407199   71396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
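The kubelet drop-in and kubeadm YAML shown above are rendered in memory and shipped to the guest as the 324-, 359-, and 2168-byte scp writes just logged. Here is a toy text/template rendering of the drop-in's variable parts; the template text is abbreviated for illustration and is not minikube's actual asset:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit holds the node-specific fields of the systemd drop-in.
var kubeletUnit = template.Must(template.New("kubelet").Parse(`[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf
`))

func main() {
	// Render with the values seen in the log above.
	_ = kubeletUnit.Execute(os.Stdout, struct{ Version, Node, IP string }{
		Version: "v1.31.0-beta.0", Node: "no-preload-945581", IP: "192.168.50.251",
	})
}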
	I0722 00:50:22.423997   71396 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:22.427616   71396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
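That bash one-liner is the usual idempotent /etc/hosts edit: filter out any stale control-plane.minikube.internal line, append the fresh mapping, and copy the result back as root. The same filtering step expressed in Go, with the privileged copy left to the caller:

package hosts

import "strings"

// EnsureEntry returns hosts content containing exactly one "<ip>\t<name>"
// line, mirroring the grep -v / echo pipeline in the log above.
func EnsureEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line) // keep every unrelated line untouched
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}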
	I0722 00:50:22.438984   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:22.547979   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:22.567666   71396 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581 for IP: 192.168.50.251
	I0722 00:50:22.567685   71396 certs.go:194] generating shared ca certs ...
	I0722 00:50:22.567699   71396 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:22.567850   71396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:22.567926   71396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:22.567940   71396 certs.go:256] generating profile certs ...
	I0722 00:50:22.568028   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/client.key
	I0722 00:50:22.568103   71396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key.32cf5d69
	I0722 00:50:22.568166   71396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key
	I0722 00:50:22.568309   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:22.568350   71396 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:22.568360   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:22.568395   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:22.568432   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:22.568462   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:22.568515   71396 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:22.569143   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:22.603737   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:22.632790   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:22.672896   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:22.703801   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:50:22.735886   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:22.761318   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:22.782796   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/no-preload-945581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 00:50:22.803928   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:22.824776   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:22.845400   71396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:22.866246   71396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:22.881270   71396 ssh_runner.go:195] Run: openssl version
	I0722 00:50:22.886595   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:22.896355   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900295   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.900337   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:22.905735   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:22.915880   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:22.925699   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929674   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.929712   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:22.934881   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:22.944568   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:22.954512   71396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958431   71396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.958470   71396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:22.963541   71396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:22.973155   71396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:22.977158   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:22.982898   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:22.988510   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:22.994350   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:22.999830   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:23.005474   71396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
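Each openssl x509 -checkend 86400 run above asks a single question: will this certificate still be valid a day from now? For readers following along, the equivalent check with Go's crypto/x509:

package certs

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// ValidFor reports whether the PEM certificate at path remains valid for at
// least d -- what `openssl x509 -checkend <seconds>` answers via its exit code.
func ValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

For example, ValidFor("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour) corresponds to the peer.crt probe above.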
	I0722 00:50:23.010751   71396 kubeadm.go:392] StartCluster: {Name:no-preload-945581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-945581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:23.010855   71396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:23.010900   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.049259   71396 cri.go:89] found id: ""
	I0722 00:50:23.049334   71396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:23.059034   71396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:23.059054   71396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:23.059109   71396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:23.069861   71396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:23.070759   71396 kubeconfig.go:125] found "no-preload-945581" server: "https://192.168.50.251:8443"
	I0722 00:50:23.072739   71396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:23.082872   71396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.251
	I0722 00:50:23.082905   71396 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:23.082916   71396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:23.082960   71396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:23.121857   71396 cri.go:89] found id: ""
	I0722 00:50:23.121928   71396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:23.141155   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:23.151969   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:23.152008   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:23.152054   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:23.162251   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:23.162312   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:23.172556   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:23.182949   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:23.183011   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:23.191717   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.201670   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:23.201729   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:23.212735   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:23.223179   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:23.223228   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
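The grep/rm sequence above boils down to one rule: any of admin.conf, kubelet.conf, controller-manager.conf, or scheduler.conf that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A sketch of that rule; file names and endpoint are parameters here, whereas the real code shells these checks out over SSH:

package cleanup

import (
	"os"
	"path/filepath"
	"strings"
)

// DropStaleConfigs removes kubeconfig files under dir that do not mention
// endpoint, mirroring the per-file grep/rm loop in the log above.
func DropStaleConfigs(dir, endpoint string, names []string) error {
	for _, name := range names {
		p := filepath.Join(dir, name)
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // still points at the right control plane; keep it
		}
		// Missing, unreadable, or pointing elsewhere: drop it for regeneration.
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			return err
		}
	}
	return nil
}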
	I0722 00:50:26.023334   72069 start.go:364] duration metric: took 3m14.522554925s to acquireMachinesLock for "embed-certs-360389"
	I0722 00:50:26.023432   72069 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:26.023441   72069 fix.go:54] fixHost starting: 
	I0722 00:50:26.023859   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:26.023896   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:26.044180   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0722 00:50:26.044615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:26.045191   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:50:26.045213   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:26.045578   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:26.045777   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:26.045944   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:50:26.047413   72069 fix.go:112] recreateIfNeeded on embed-certs-360389: state=Stopped err=<nil>
	I0722 00:50:26.047439   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	W0722 00:50:26.047584   72069 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:26.049449   72069 out.go:177] * Restarting existing kvm2 VM for "embed-certs-360389" ...
	I0722 00:50:26.050756   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Start
	I0722 00:50:26.050950   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring networks are active...
	I0722 00:50:26.051718   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network default is active
	I0722 00:50:26.052129   72069 main.go:141] libmachine: (embed-certs-360389) Ensuring network mk-embed-certs-360389 is active
	I0722 00:50:26.052586   72069 main.go:141] libmachine: (embed-certs-360389) Getting domain xml...
	I0722 00:50:26.053323   72069 main.go:141] libmachine: (embed-certs-360389) Creating domain...
	I0722 00:50:24.595842   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596249   71766 main.go:141] libmachine: (old-k8s-version-366657) Found IP for machine: 192.168.39.174
	I0722 00:50:24.596271   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has current primary IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.596277   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserving static IP address...
	I0722 00:50:24.596686   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.596711   71766 main.go:141] libmachine: (old-k8s-version-366657) Reserved static IP address: 192.168.39.174
	I0722 00:50:24.596725   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | skip adding static IP to network mk-old-k8s-version-366657 - found existing host DHCP lease matching {name: "old-k8s-version-366657", mac: "52:54:00:1a:f7:37", ip: "192.168.39.174"}
	I0722 00:50:24.596739   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Getting to WaitForSSH function...
	I0722 00:50:24.596752   71766 main.go:141] libmachine: (old-k8s-version-366657) Waiting for SSH to be available...
	I0722 00:50:24.598909   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599310   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.599343   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.599445   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH client type: external
	I0722 00:50:24.599463   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa (-rw-------)
	I0722 00:50:24.599540   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:24.599565   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | About to run SSH command:
	I0722 00:50:24.599578   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | exit 0
	I0722 00:50:24.726437   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:24.726823   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetConfigRaw
	I0722 00:50:24.727457   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:24.729852   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730193   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.730214   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.730487   71766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/config.json ...
	I0722 00:50:24.730709   71766 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:24.730735   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:24.730958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.733440   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.733822   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.733853   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.734009   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.734194   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734382   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.734540   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.734737   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.734925   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.734939   71766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:24.855189   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:24.855224   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855496   71766 buildroot.go:166] provisioning hostname "old-k8s-version-366657"
	I0722 00:50:24.855526   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:24.855731   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.858417   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858800   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.858836   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.858958   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.859147   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859316   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:24.859476   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:24.859680   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:24.859858   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:24.859874   71766 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-366657 && echo "old-k8s-version-366657" | sudo tee /etc/hostname
	I0722 00:50:24.995945   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-366657
	
	I0722 00:50:24.995967   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:24.998957   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:24.999380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:24.999761   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:24.999965   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000153   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.000305   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.000486   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.000688   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.000706   71766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-366657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-366657/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-366657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:25.127868   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:25.127895   71766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:25.127918   71766 buildroot.go:174] setting up certificates
	I0722 00:50:25.127929   71766 provision.go:84] configureAuth start
	I0722 00:50:25.127939   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetMachineName
	I0722 00:50:25.128254   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:25.130925   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.131332   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.131433   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.133762   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134049   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.134082   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.134243   71766 provision.go:143] copyHostCerts
	I0722 00:50:25.134306   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:25.134315   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:25.134379   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:25.134476   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:25.134484   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:25.134504   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:25.134560   71766 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:25.134566   71766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:25.134584   71766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:25.134670   71766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-366657 san=[127.0.0.1 192.168.39.174 localhost minikube old-k8s-version-366657]
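provision.go's "generating server cert" step issues a docker-machine-style server certificate whose SANs cover every name and address the VM answers to. A condensed crypto/x509 sketch of issuing such a certificate from an already-loaded CA; CA loading and key serialization are elided, and the field choices are illustrative rather than minikube's exact values:

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// NewServerCert signs a server certificate for the given SANs with the CA.
func NewServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"minikube"}},
		DNSNames:     dnsNames, // e.g. localhost, minikube, old-k8s-version-366657
		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.174
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}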
	I0722 00:50:25.341044   71766 provision.go:177] copyRemoteCerts
	I0722 00:50:25.341102   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:25.341134   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.343943   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344346   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.344380   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.344558   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.344786   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.344963   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.345146   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.432495   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:50:25.460500   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:25.484593   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0722 00:50:25.506448   71766 provision.go:87] duration metric: took 378.504779ms to configureAuth
	I0722 00:50:25.506482   71766 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:25.506746   71766 config.go:182] Loaded profile config "old-k8s-version-366657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:50:25.506830   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.509293   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509642   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.509671   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.509796   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.510015   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510238   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.510400   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.510595   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.510796   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.510825   71766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:25.778434   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:25.778466   71766 machine.go:97] duration metric: took 1.047739425s to provisionDockerMachine
	I0722 00:50:25.778482   71766 start.go:293] postStartSetup for "old-k8s-version-366657" (driver="kvm2")
	I0722 00:50:25.778503   71766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:25.778546   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:25.778895   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:25.778921   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.781347   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781683   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.781710   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.781821   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.782003   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.782154   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.782306   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:25.868614   71766 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:25.872668   71766 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:25.872698   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:25.872779   71766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:25.872862   71766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:25.872949   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:25.881498   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:25.903060   71766 start.go:296] duration metric: took 124.542869ms for postStartSetup
	I0722 00:50:25.903101   71766 fix.go:56] duration metric: took 18.795843981s for fixHost
	I0722 00:50:25.903124   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:25.905945   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906318   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:25.906348   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:25.906507   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:25.906711   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.906872   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:25.907064   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:25.907248   71766 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:25.907468   71766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0722 00:50:25.907482   71766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:26.023173   71766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609425.999209033
	
	I0722 00:50:26.023195   71766 fix.go:216] guest clock: 1721609425.999209033
	I0722 00:50:26.023205   71766 fix.go:229] Guest: 2024-07-22 00:50:25.999209033 +0000 UTC Remote: 2024-07-22 00:50:25.903106071 +0000 UTC m=+242.757546468 (delta=96.102962ms)
	I0722 00:50:26.023244   71766 fix.go:200] guest clock delta is within tolerance: 96.102962ms
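The guest-clock fix above computes Guest minus Remote and only forces a resync when the drift exceeds a tolerance (the 96.102962ms delta here passes). The arithmetic is just:

package clock

import "time"

// WithinTolerance reports whether the guest/host clock delta is small enough
// to skip a resync, as in the fix.go lines above.
func WithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta // compare magnitudes; drift can go either way
	}
	return delta <= tolerance
}

With the timestamps above, WithinTolerance(guest, remote, 2*time.Second) would return true; the 2s figure is an assumed tolerance for illustration, not a value taken from this log.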
	I0722 00:50:26.023251   71766 start.go:83] releasing machines lock for "old-k8s-version-366657", held for 18.916030347s
	I0722 00:50:26.023280   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.023587   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:26.026482   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.026906   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.026948   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.027100   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027590   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027748   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .DriverName
	I0722 00:50:26.027821   71766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:26.027868   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.028034   71766 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:26.028054   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHHostname
	I0722 00:50:26.030621   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.030898   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031030   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031051   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031235   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:26.031295   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:26.031325   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031425   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHPort
	I0722 00:50:26.031506   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031564   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHKeyPath
	I0722 00:50:26.031667   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031724   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetSSHUsername
	I0722 00:50:26.031776   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.031844   71766 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/old-k8s-version-366657/id_rsa Username:docker}
	I0722 00:50:26.143565   71766 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:26.151224   71766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:26.305365   71766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:26.312425   71766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:26.312503   71766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:26.328772   71766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:26.328802   71766 start.go:495] detecting cgroup driver to use...
	I0722 00:50:26.328885   71766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:26.350903   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:26.364746   71766 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:26.364815   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:26.380440   71766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:26.396057   71766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:26.533254   71766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:26.677706   71766 docker.go:233] disabling docker service ...
	I0722 00:50:26.677783   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:26.695364   71766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:26.711391   71766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:26.866276   71766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:27.017177   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:27.032836   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:27.053770   71766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0722 00:50:27.053832   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.066654   71766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:27.066741   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.080820   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.091522   71766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:27.102409   71766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
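
The sed edits above are the whole CRI-O reconfiguration for this profile: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" after it. A minimal sketch of assembling and running those exact commands, where sshRun is a hypothetical stand-in for minikube's ssh_runner (executed locally here so the sketch is self-contained):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshRun stands in for minikube's ssh_runner.
    func sshRun(cmd string) error {
        return exec.Command("sh", "-c", cmd).Run()
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        cmds := []string{
            // pin the pause image used for pod sandboxes
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, "registry.k8s.io/pause:3.2", conf),
            // use cgroupfs as the cgroup manager
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
            // drop any stale conmon_cgroup line, then re-add it after cgroup_manager
            fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
        }
        for _, c := range cmds {
            if err := sshRun(c); err != nil {
                fmt.Println("config step failed:", err)
                return
            }
        }
    }
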
	I0722 00:50:27.120168   71766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:27.136258   71766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:27.136317   71766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:27.152736   71766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:27.163232   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:27.299054   71766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:27.442092   71766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:27.442176   71766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:27.446778   71766 start.go:563] Will wait 60s for crictl version
	I0722 00:50:27.446848   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:27.451014   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:27.497326   71766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
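
The four lines above are the raw "crictl version" output that start.go waits up to 60s for; consuming it is just splitting "Key: value" pairs. A minimal sketch, assuming exactly the output format shown:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        raw := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
        info := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(raw))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
                info[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        fmt.Printf("runtime %s %s (API %s)\n",
            info["RuntimeName"], info["RuntimeVersion"], info["RuntimeApiVersion"])
    }
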
	I0722 00:50:27.497421   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.525377   71766 ssh_runner.go:195] Run: crio --version
	I0722 00:50:27.556102   71766 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0722 00:50:27.557374   71766 main.go:141] libmachine: (old-k8s-version-366657) Calling .GetIP
	I0722 00:50:27.560745   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561148   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f7:37", ip: ""} in network mk-old-k8s-version-366657: {Iface:virbr1 ExpiryTime:2024-07-22 01:40:50 +0000 UTC Type:0 Mac:52:54:00:1a:f7:37 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:old-k8s-version-366657 Clientid:01:52:54:00:1a:f7:37}
	I0722 00:50:27.561185   71766 main.go:141] libmachine: (old-k8s-version-366657) DBG | domain old-k8s-version-366657 has defined IP address 192.168.39.174 and MAC address 52:54:00:1a:f7:37 in network mk-old-k8s-version-366657
	I0722 00:50:27.561398   71766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:27.565272   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:27.578334   71766 kubeadm.go:883] updating cluster {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:27.578480   71766 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0722 00:50:27.578548   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:27.640111   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:27.640188   71766 ssh_runner.go:195] Run: which lz4
	I0722 00:50:27.644052   71766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:27.648244   71766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:27.648275   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0722 00:50:23.231803   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:23.240990   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.342544   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:23.953879   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.147978   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.219220   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:24.326196   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:24.326271   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:24.826734   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.327217   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:25.367904   71396 api_server.go:72] duration metric: took 1.041704474s to wait for apiserver process to appear ...
	I0722 00:50:25.367938   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:25.367965   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.485350   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:28.485385   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:28.485403   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.747483   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.747518   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:28.868817   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:28.880513   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:28.880550   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.368530   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.383715   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:29.383760   71396 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:29.868120   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:50:29.877138   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:50:29.887974   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:50:29.888074   71396 api_server.go:131] duration metric: took 4.520127124s to wait for apiserver health ...
	I0722 00:50:29.888102   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:50:29.888136   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:29.890064   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
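
The 403 -> 500 -> 200 progression above is the apiserver coming up: anonymous requests are rejected until the RBAC bootstrap roles land, then the remaining post-start hooks clear one by one until /healthz returns "ok". A minimal sketch of such a polling loop, assuming a cluster-local serving CA (hence InsecureSkipVerify) and a timeout picked for illustration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // the test apiserver uses a cluster-local CA, so skip verification
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.251:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body)) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
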
	I0722 00:50:27.372853   72069 main.go:141] libmachine: (embed-certs-360389) Waiting to get IP...
	I0722 00:50:27.373957   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.374555   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.374676   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.374530   72949 retry.go:31] will retry after 296.485282ms: waiting for machine to come up
	I0722 00:50:27.673086   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.673592   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.673631   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.673519   72949 retry.go:31] will retry after 310.216849ms: waiting for machine to come up
	I0722 00:50:27.985049   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:27.985471   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:27.985503   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:27.985429   72949 retry.go:31] will retry after 414.762643ms: waiting for machine to come up
	I0722 00:50:28.402452   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.403013   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.403038   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.402956   72949 retry.go:31] will retry after 583.417858ms: waiting for machine to come up
	I0722 00:50:28.987836   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:28.988271   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:28.988302   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:28.988230   72949 retry.go:31] will retry after 669.885759ms: waiting for machine to come up
	I0722 00:50:29.660483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:29.660990   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:29.661017   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:29.660954   72949 retry.go:31] will retry after 572.748153ms: waiting for machine to come up
	I0722 00:50:30.235928   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:30.236421   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:30.236444   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:30.236370   72949 retry.go:31] will retry after 1.075901365s: waiting for machine to come up
	I0722 00:50:31.313783   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:31.314294   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:31.314327   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:31.314235   72949 retry.go:31] will retry after 1.321638517s: waiting for machine to come up
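
The retry.go lines above show the wait-for-IP loop backing off with jitter (296ms, 310ms, 414ms, ..., 1.3s) until the domain's DHCP lease appears. A minimal sketch of the same pattern; lookupIP is a hypothetical stand-in for the libvirt lease query, and the roughly-doubling jittered backoff is an approximation of the intervals logged:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for a domain.
    func lookupIP(domain string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(domain string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            // jittered, roughly doubling backoff, as in the retry.go lines above
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/4)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff *= 2
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
    }

    func main() {
        if _, err := waitForIP("embed-certs-360389", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }
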
	I0722 00:50:29.185503   71766 crio.go:462] duration metric: took 1.541485996s to copy over tarball
	I0722 00:50:29.185577   71766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:50:32.307529   71766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.121924371s)
	I0722 00:50:32.307563   71766 crio.go:469] duration metric: took 3.122035524s to extract the tarball
	I0722 00:50:32.307571   71766 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:32.349540   71766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:32.389391   71766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0722 00:50:32.389413   71766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 00:50:32.389483   71766 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.389684   71766 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0722 00:50:32.389705   71766 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.389523   71766 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.389529   71766 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.389550   71766 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.389481   71766 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.389610   71766 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:32.391618   71766 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.391668   71766 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.391699   71766 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.391604   71766 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.391738   71766 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.391885   71766 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.392040   71766 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0722 00:50:32.595306   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0722 00:50:32.617406   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.620734   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.632126   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.633087   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.634908   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.639522   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.654724   71766 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0722 00:50:32.654767   71766 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0722 00:50:32.654811   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.711734   71766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0722 00:50:32.711784   71766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.711835   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782814   71766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0722 00:50:32.782859   71766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.782907   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.782974   71766 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0722 00:50:32.783020   71766 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0722 00:50:32.783055   71766 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.783054   71766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0722 00:50:32.783021   71766 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.783075   71766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.783095   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783102   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.783105   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.793888   71766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0722 00:50:32.793905   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0722 00:50:32.793940   71766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.793957   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0722 00:50:32.793979   71766 ssh_runner.go:195] Run: which crictl
	I0722 00:50:32.794024   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0722 00:50:32.794054   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0722 00:50:32.794081   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0722 00:50:32.794100   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0722 00:50:32.797621   71766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0722 00:50:32.914793   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0722 00:50:32.914817   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0722 00:50:32.945927   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0722 00:50:32.945982   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0722 00:50:32.946031   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0722 00:50:32.946044   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0722 00:50:32.947128   71766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
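
The cache_images flow above reduces to: inspect each required image in the runtime, and when the stored ID does not match the expected digest (or the image is absent), remove the stale copy and reload it from the on-disk cache. A minimal sketch of that needs-transfer decision, with run a hypothetical command helper and the pause:3.2 hash taken from the log above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run stands in for minikube's ssh_runner.
    func run(cmd string) (string, error) {
        out, err := exec.Command("sh", "-c", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    // needsTransfer reports whether image must be reloaded from the cache
    // because the runtime's copy is missing or has an unexpected ID.
    func needsTransfer(image, wantID string) bool {
        gotID, err := run("sudo podman image inspect --format {{.Id}} " + image)
        return err != nil || gotID != wantID
    }

    func main() {
        img := "registry.k8s.io/pause:3.2"
        want := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
        if needsTransfer(img, want) {
            // mirror the log: delete the stale copy, then load from the cache dir
            _, _ = run("sudo /usr/bin/crictl rmi " + img)
            fmt.Println("loading", img, "from the local image cache")
        }
    }
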
	I0722 00:50:29.891411   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:29.907786   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:50:29.947859   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:50:29.967814   71396 system_pods.go:59] 8 kube-system pods found
	I0722 00:50:29.967874   71396 system_pods.go:61] "coredns-5cfdc65f69-sfd4h" [4c9f9837-0cbf-40c7-9e39-37550d9cc463] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:50:29.967887   71396 system_pods.go:61] "etcd-no-preload-945581" [275e5406-c784-4e4e-b591-f01c4deafe6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:50:29.967915   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [ca2bfe5e-9fc9-49ee-9e19-b01a5747fbe4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:50:29.967928   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [c6866588-c2e0-4b55-923b-086441e8197d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:50:29.967938   71396 system_pods.go:61] "kube-proxy-f5ttf" [d5814989-002e-46af-b0e4-aa6e2dd622f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:50:29.967951   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [27fbb188-34cd-491f-9fe3-ea995abec8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:50:29.967960   71396 system_pods.go:61] "metrics-server-78fcd8795b-k5q49" [3952712a-f35a-43e3-9bb5-54cd952e6ffb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:50:29.967972   71396 system_pods.go:61] "storage-provisioner" [4b750430-8af4-40c6-8e67-74f8f991f756] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:50:29.967993   71396 system_pods.go:74] duration metric: took 20.109811ms to wait for pod list to return data ...
	I0722 00:50:29.968005   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:50:29.975885   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:50:29.975930   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:50:29.975945   71396 node_conditions.go:105] duration metric: took 7.933593ms to run NodePressure ...
	I0722 00:50:29.975981   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:30.350758   71396 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355870   71396 kubeadm.go:739] kubelet initialised
	I0722 00:50:30.355901   71396 kubeadm.go:740] duration metric: took 5.057878ms waiting for restarted kubelet to initialise ...
	I0722 00:50:30.355911   71396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:50:30.361313   71396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.366039   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366064   71396 pod_ready.go:81] duration metric: took 4.712717ms for pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.366075   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "coredns-5cfdc65f69-sfd4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.366086   71396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.370566   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370590   71396 pod_ready.go:81] duration metric: took 4.494737ms for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.370610   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "etcd-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.370618   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.374679   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374703   71396 pod_ready.go:81] duration metric: took 4.07802ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.374711   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-apiserver-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.374716   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.388749   71396 pod_ready.go:97] node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388779   71396 pod_ready.go:81] duration metric: took 14.053875ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	E0722 00:50:30.388790   71396 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-945581" hosting pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-945581" has status "Ready":"False"
	I0722 00:50:30.388799   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755551   71396 pod_ready.go:92] pod "kube-proxy-f5ttf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:30.755575   71396 pod_ready.go:81] duration metric: took 366.766187ms for pod "kube-proxy-f5ttf" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:30.755586   71396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
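
The pod_ready waits above poll each system-critical pod for the Ready condition, short-circuiting with "skipping!" while the hosting node itself reports Ready=False. A minimal client-go sketch of the per-pod check (kubeconfig path, pod name, and timeout are assumptions for illustration, not minikube's actual pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-f5ttf", metav1.GetOptions{})
            if err == nil && podReady(p) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
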
	I0722 00:50:32.637857   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:32.638275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:32.638310   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:32.638228   72949 retry.go:31] will retry after 1.712692655s: waiting for machine to come up
	I0722 00:50:34.352650   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:34.353119   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:34.353145   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:34.353073   72949 retry.go:31] will retry after 1.484222747s: waiting for machine to come up
	I0722 00:50:35.838641   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:35.839201   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:35.839222   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:35.839183   72949 retry.go:31] will retry after 2.627126132s: waiting for machine to come up
	I0722 00:50:33.326051   71766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:50:33.472864   71766 cache_images.go:92] duration metric: took 1.083433696s to LoadCachedImages
	W0722 00:50:33.472967   71766 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-5094/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0722 00:50:33.472986   71766 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0722 00:50:33.473129   71766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-366657 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:33.473228   71766 ssh_runner.go:195] Run: crio config
	I0722 00:50:33.531376   71766 cni.go:84] Creating CNI manager for ""
	I0722 00:50:33.531396   71766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:33.531404   71766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:33.531422   71766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-366657 NodeName:old-k8s-version-366657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0722 00:50:33.531550   71766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-366657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
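Once rendered and copied to /var/tmp/minikube/kubeadm.yaml, this config drives the phased init shown earlier in this log for the no-preload profile (certs, kubeconfig, kubelet-start, control-plane, etcd). A sketch of that sequence, again with sshRun as a hypothetical command helper; for this old-k8s-version profile the binaries directory would be v1.20.0 rather than the v1.31.0-beta.0 seen above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshRun stands in for minikube's ssh_runner.
    func sshRun(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }

    func main() {
        const env = `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" `
        phases := []string{
            "kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml",
            "kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml",
            "kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml",
            "kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml",
            "kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml",
        }
        for _, p := range phases {
            if err := sshRun(env + p); err != nil {
                fmt.Println("phase failed:", err)
                return
            }
        }
    }
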
	I0722 00:50:33.531614   71766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0722 00:50:33.541419   71766 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:33.541491   71766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:33.550703   71766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0722 00:50:33.566269   71766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:33.581854   71766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0722 00:50:33.599717   71766 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:33.603361   71766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:33.615376   71766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:33.747842   71766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:33.767272   71766 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657 for IP: 192.168.39.174
	I0722 00:50:33.767296   71766 certs.go:194] generating shared ca certs ...
	I0722 00:50:33.767314   71766 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:33.767466   71766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:33.767533   71766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:33.767548   71766 certs.go:256] generating profile certs ...
	I0722 00:50:33.767663   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/client.key
	I0722 00:50:33.767779   71766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key.2cc8579c
	I0722 00:50:33.767843   71766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key
	I0722 00:50:33.767981   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:33.768014   71766 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:33.768028   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:33.768059   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:33.768086   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:33.768119   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:33.768177   71766 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:33.768796   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:33.805013   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:33.842273   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:33.871657   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:33.905885   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 00:50:33.945447   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:33.987191   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:34.017838   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/old-k8s-version-366657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:34.061776   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:34.084160   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:34.106490   71766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:34.131694   71766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:34.150208   71766 ssh_runner.go:195] Run: openssl version
	I0722 00:50:34.155648   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:34.165650   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.169948   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.170005   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:34.175496   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:34.185435   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:34.195356   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199499   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.199562   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:34.204876   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:34.214676   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:34.224926   71766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.228954   71766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.229009   71766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:34.234309   71766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:34.244747   71766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:34.249101   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:34.255085   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:34.261042   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:34.267212   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:34.272706   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:34.278093   71766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
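	Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); only when all of them pass does the restart path reuse the existing certs instead of regenerating them. The same check in Go, as a sketch built on crypto/x509 (the path is one of the certs probed above):

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // expiresWithin reports whether the PEM certificate at path expires
	    // within d, mirroring `openssl x509 -checkend <seconds>`.
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("no PEM block in %s", path)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		return
	    	}
	    	fmt.Println("expires within 24h:", soon)
	    }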
	I0722 00:50:34.283797   71766 kubeadm.go:392] StartCluster: {Name:old-k8s-version-366657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-366657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:34.283874   71766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:34.283959   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.319527   71766 cri.go:89] found id: ""
	I0722 00:50:34.319610   71766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:34.330625   71766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:34.330648   71766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:34.330712   71766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:34.340738   71766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:34.341687   71766 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-366657" does not appear in /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:50:34.342243   71766 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-5094/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-366657" cluster setting kubeconfig missing "old-k8s-version-366657" context setting]
	I0722 00:50:34.343137   71766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:34.379042   71766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:34.389633   71766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I0722 00:50:34.389675   71766 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:34.389687   71766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:34.389747   71766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:34.429677   71766 cri.go:89] found id: ""
	I0722 00:50:34.429752   71766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:34.449498   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:34.460132   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:34.460153   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:34.460209   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:34.469946   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:34.470012   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:34.479577   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:34.488085   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:34.488143   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:34.497434   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.508955   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:34.509024   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:34.522160   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:34.530889   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:34.530955   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:34.539988   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:34.549389   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:34.678721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.510276   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.746079   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.876163   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:35.960112   71766 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:35.960227   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.460694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:36.960409   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.460334   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:37.961142   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:33.328730   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:35.764692   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:38.467549   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:38.467949   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:38.467979   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:38.467900   72949 retry.go:31] will retry after 3.474632615s: waiting for machine to come up
	I0722 00:50:38.460660   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.960541   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.460519   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:39.960698   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.460424   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:40.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.460633   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:41.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.461093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:42.961222   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:38.262645   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:40.765815   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:41.943628   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:41.944065   72069 main.go:141] libmachine: (embed-certs-360389) DBG | unable to find current IP address of domain embed-certs-360389 in network mk-embed-certs-360389
	I0722 00:50:41.944098   72069 main.go:141] libmachine: (embed-certs-360389) DBG | I0722 00:50:41.944020   72949 retry.go:31] will retry after 3.789965437s: waiting for machine to come up
	I0722 00:50:45.737995   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.738549   72069 main.go:141] libmachine: (embed-certs-360389) Found IP for machine: 192.168.72.32
	I0722 00:50:45.738585   72069 main.go:141] libmachine: (embed-certs-360389) Reserving static IP address...
	I0722 00:50:45.738600   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has current primary IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.739194   72069 main.go:141] libmachine: (embed-certs-360389) Reserved static IP address: 192.168.72.32
	I0722 00:50:45.739221   72069 main.go:141] libmachine: (embed-certs-360389) Waiting for SSH to be available...
	I0722 00:50:45.739246   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.739273   72069 main.go:141] libmachine: (embed-certs-360389) DBG | skip adding static IP to network mk-embed-certs-360389 - found existing host DHCP lease matching {name: "embed-certs-360389", mac: "52:54:00:bc:4e:22", ip: "192.168.72.32"}
	I0722 00:50:45.739290   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Getting to WaitForSSH function...
	I0722 00:50:45.741483   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741865   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.741886   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.741986   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH client type: external
	I0722 00:50:45.742006   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa (-rw-------)
	I0722 00:50:45.742044   72069 main.go:141] libmachine: (embed-certs-360389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:50:45.742057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | About to run SSH command:
	I0722 00:50:45.742069   72069 main.go:141] libmachine: (embed-certs-360389) DBG | exit 0
	I0722 00:50:45.866697   72069 main.go:141] libmachine: (embed-certs-360389) DBG | SSH cmd err, output: <nil>: 
	I0722 00:50:45.867052   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetConfigRaw
	I0722 00:50:45.867691   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:45.870275   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870660   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.870689   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.870906   72069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/config.json ...
	I0722 00:50:45.871083   72069 machine.go:94] provisionDockerMachine start ...
	I0722 00:50:45.871099   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:45.871366   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.873526   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873849   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.873875   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.873989   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.874160   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874305   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.874441   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.874630   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.874816   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.874828   72069 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:50:45.978653   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:50:45.978681   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.978911   72069 buildroot.go:166] provisioning hostname "embed-certs-360389"
	I0722 00:50:45.978938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:45.979106   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:45.981737   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982224   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:45.982258   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:45.982527   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:45.982746   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.982938   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:45.983070   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:45.983247   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:45.983409   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:45.983421   72069 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-360389 && echo "embed-certs-360389" | sudo tee /etc/hostname
	I0722 00:50:46.099906   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-360389
	
	I0722 00:50:46.099939   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.102524   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.102868   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.102898   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.103089   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.103320   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103505   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.103652   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.103856   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.104085   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.104113   72069 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-360389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-360389/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-360389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:50:46.214705   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:50:46.214733   72069 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:50:46.214750   72069 buildroot.go:174] setting up certificates
	I0722 00:50:46.214760   72069 provision.go:84] configureAuth start
	I0722 00:50:46.214768   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetMachineName
	I0722 00:50:46.215055   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:46.217389   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217767   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.217811   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.217929   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.219965   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.220288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.220369   72069 provision.go:143] copyHostCerts
	I0722 00:50:46.220437   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:50:46.220454   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:50:46.220518   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:50:46.220636   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:50:46.220647   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:50:46.220677   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:50:46.220751   72069 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:50:46.220762   72069 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:50:46.220787   72069 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:50:46.220850   72069 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.embed-certs-360389 san=[127.0.0.1 192.168.72.32 embed-certs-360389 localhost minikube]
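	The server cert generated above lists the VM IP, loopback, the hostname, and the minikube aliases as subject alternative names, so TLS verification succeeds however the machine is addressed. A minimal Go sketch of issuing such a cert (self-signed here for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log; the SANs and the 26280h expiry mirror the log values):

	    package main

	    import (
	    	"crypto/ecdsa"
	    	"crypto/elliptic"
	    	"crypto/rand"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	    	if err != nil {
	    		panic(err)
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-360389"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		DNSNames:     []string{"embed-certs-360389", "localhost", "minikube"},
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.32")},
	    	}
	    	// Self-signed for brevity; minikube signs with its CA key instead.
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }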
	I0722 00:50:46.370125   72069 provision.go:177] copyRemoteCerts
	I0722 00:50:46.370178   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:50:46.370202   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.372909   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373234   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.373266   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.373448   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.373629   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.373778   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.373905   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.023130   71227 start.go:364] duration metric: took 52.873221478s to acquireMachinesLock for "default-k8s-diff-port-214905"
	I0722 00:50:47.023182   71227 start.go:96] Skipping create...Using existing machine configuration
	I0722 00:50:47.023192   71227 fix.go:54] fixHost starting: 
	I0722 00:50:47.023547   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:50:47.023575   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:50:47.041199   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0722 00:50:47.041643   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:50:47.042130   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:50:47.042154   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:50:47.042531   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:50:47.042751   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:50:47.042923   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:50:47.044505   71227 fix.go:112] recreateIfNeeded on default-k8s-diff-port-214905: state=Stopped err=<nil>
	I0722 00:50:47.044532   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	W0722 00:50:47.044693   71227 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 00:50:47.046628   71227 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-214905" ...
	I0722 00:50:43.460446   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:43.960706   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.460586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:44.960579   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.460573   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:45.961273   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.461155   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:46.961024   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.460530   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:47.960457   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
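	The block of repeated pgrep runs above is a fixed-cadence poll: roughly every 500ms the runner looks for a kube-apiserver process whose command line matches the pattern, until one appears or an overall deadline expires. A stripped-down version of that wait loop (a sketch; the pattern and ~500ms interval come from the log, the one-minute timeout is illustrative):

	    // waitForProcess polls `pgrep -xnf pattern` at a fixed interval until
	    // it finds a match (pgrep exits 0) or the deadline passes.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    func waitForProcess(pattern string, interval, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for {
	    		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
	    			return nil // a matching process exists
	    		}
	    		if time.Now().After(deadline) {
	    			return fmt.Errorf("timed out waiting for %q", pattern)
	    		}
	    		time.Sleep(interval)
	    	}
	    }

	    func main() {
	    	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute))
	    }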
	I0722 00:50:43.261879   71396 pod_ready.go:102] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:44.760665   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:50:44.760686   71396 pod_ready.go:81] duration metric: took 14.005092247s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:44.760696   71396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:46.766941   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:46.456883   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:50:46.484904   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 00:50:46.507447   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:50:46.531368   72069 provision.go:87] duration metric: took 316.597012ms to configureAuth
	I0722 00:50:46.531395   72069 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:50:46.531551   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:50:46.531616   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.534088   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534495   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.534534   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.534733   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.534919   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535080   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.535198   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.535320   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.535470   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.535482   72069 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:50:46.792609   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:50:46.792646   72069 machine.go:97] duration metric: took 921.551541ms to provisionDockerMachine
	I0722 00:50:46.792660   72069 start.go:293] postStartSetup for "embed-certs-360389" (driver="kvm2")
	I0722 00:50:46.792673   72069 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:50:46.792699   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:46.793002   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:50:46.793030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.796062   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796509   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.796535   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.796677   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.796876   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.797012   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.797123   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:46.880839   72069 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:50:46.884726   72069 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:50:46.884747   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:50:46.884813   72069 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:50:46.884916   72069 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:50:46.885032   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:50:46.893669   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:46.915508   72069 start.go:296] duration metric: took 122.834675ms for postStartSetup
	I0722 00:50:46.915553   72069 fix.go:56] duration metric: took 20.8921124s for fixHost
	I0722 00:50:46.915579   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:46.918388   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918822   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:46.918852   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:46.918959   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:46.919175   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919347   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:46.919515   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:46.919683   72069 main.go:141] libmachine: Using SSH client type: native
	I0722 00:50:46.919861   72069 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0722 00:50:46.919875   72069 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:50:47.022951   72069 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609447.006036489
	
	I0722 00:50:47.022980   72069 fix.go:216] guest clock: 1721609447.006036489
	I0722 00:50:47.022991   72069 fix.go:229] Guest: 2024-07-22 00:50:47.006036489 +0000 UTC Remote: 2024-07-22 00:50:46.915558854 +0000 UTC m=+215.550003867 (delta=90.477635ms)
	I0722 00:50:47.023036   72069 fix.go:200] guest clock delta is within tolerance: 90.477635ms
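	The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the local wall clock and accept the host when the delta is inside a tolerance, here about 90ms of skew. A sketch of that comparison, parsing the same seconds.nanoseconds output (the 200ms threshold is illustrative, not minikube's actual constant):

	    // clockDelta parses `date +%s.%N` output (seconds plus a 9-digit,
	    // zero-padded nanosecond field) and returns its offset from the
	    // local clock.
	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    func clockDelta(guest string) (time.Duration, error) {
	    	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	    	sec, err := strconv.ParseInt(parts[0], 10, 64)
	    	if err != nil {
	    		return 0, err
	    	}
	    	var nsec int64
	    	if len(parts) == 2 {
	    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	    			return 0, err
	    		}
	    	}
	    	return time.Since(time.Unix(sec, nsec)), nil
	    }

	    func main() {
	    	delta, err := clockDelta("1721609447.006036489") // guest reading from the log
	    	if err != nil {
	    		panic(err)
	    	}
	    	const tolerance = 200 * time.Millisecond // illustrative threshold
	    	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta.Abs() < tolerance)
	    }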
	I0722 00:50:47.023045   72069 start.go:83] releasing machines lock for "embed-certs-360389", held for 20.999640853s
	I0722 00:50:47.023075   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.023311   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:47.025940   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026256   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.026288   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.026388   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.026847   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027038   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:50:47.027124   72069 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:50:47.027176   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.027241   72069 ssh_runner.go:195] Run: cat /version.json
	I0722 00:50:47.027272   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:50:47.029889   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030109   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030267   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030430   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030539   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:47.030575   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:47.030622   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.030769   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.030862   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:50:47.030961   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.031068   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:50:47.031244   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:50:47.031415   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:50:47.107073   72069 ssh_runner.go:195] Run: systemctl --version
	I0722 00:50:47.141152   72069 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:50:47.282293   72069 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:50:47.288370   72069 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:50:47.288442   72069 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:50:47.307784   72069 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:50:47.307806   72069 start.go:495] detecting cgroup driver to use...
	I0722 00:50:47.307865   72069 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:50:47.327947   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:50:47.343602   72069 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:50:47.343677   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:50:47.358451   72069 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:50:47.372164   72069 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:50:47.490397   72069 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:50:47.674470   72069 docker.go:233] disabling docker service ...
	I0722 00:50:47.674552   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:50:47.694816   72069 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:50:47.709552   72069 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:50:47.848196   72069 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:50:47.983458   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:50:47.997354   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:50:48.014833   72069 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:50:48.014891   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.024945   72069 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:50:48.025007   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.036104   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.047711   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.058020   72069 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:50:48.069295   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.079444   72069 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:50:48.096380   72069 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
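	The 02-crio.conf edits above all follow one pattern: an anchored sed substitution forces a single canonical `key = value` line for pause_image, cgroup_manager, and conmon_cgroup no matter what the file held before, plus a grep-or-append guard that creates default_sysctls only if it is missing. The same rewrite expressed in Go (a sketch; the path and keys match the log, setConfKey is a made-up helper):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"regexp"
	    )

	    // setConfKey rewrites every `key = ...` line in a crio drop-in to the
	    // given value, mirroring the `sed -i 's|^.*key = .*$|key = "value"|'`
	    // calls above.
	    func setConfKey(path, key, value string) error {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return err
	    	}
	    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	    	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	    	return os.WriteFile(path, out, 0644)
	    }

	    func main() {
	    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	    	for k, v := range map[string]string{
	    		"pause_image":    "registry.k8s.io/pause:3.9",
	    		"cgroup_manager": "cgroupfs",
	    	} {
	    		if err := setConfKey(conf, k, v); err != nil {
	    			fmt.Fprintln(os.Stderr, err)
	    		}
	    	}
	    }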
	I0722 00:50:48.106559   72069 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:50:48.115381   72069 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:50:48.115439   72069 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:50:48.129780   72069 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:50:48.138800   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:48.260463   72069 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:50:48.406174   72069 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:50:48.406253   72069 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:50:48.411126   72069 start.go:563] Will wait 60s for crictl version
	I0722 00:50:48.411192   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:50:48.414636   72069 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:50:48.452194   72069 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:50:48.452280   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.478442   72069 ssh_runner.go:195] Run: crio --version
	I0722 00:50:48.510555   72069 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:50:48.511723   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetIP
	I0722 00:50:48.514821   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515200   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:50:48.515227   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:50:48.515516   72069 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0722 00:50:48.519493   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:50:48.532650   72069 kubeadm.go:883] updating cluster {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:50:48.532787   72069 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:50:48.532848   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:48.570179   72069 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:50:48.570252   72069 ssh_runner.go:195] Run: which lz4
	I0722 00:50:48.574085   72069 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:50:48.578247   72069 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:50:48.578279   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:50:49.938250   72069 crio.go:462] duration metric: took 1.364193638s to copy over tarball
	I0722 00:50:49.938347   72069 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
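	[editor's note] When the preload tarball is missing on the node (the status-1 stat above), it is scp'd over and unpacked into /var with xattrs preserved so image-layer capabilities survive. An illustrative Go wrapper around the same tar invocation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("tarball missing; would scp it over first:", err)
			return
		}
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Println("preloaded images extracted under /var")
	}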
	I0722 00:50:47.048055   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Start
	I0722 00:50:47.048246   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring networks are active...
	I0722 00:50:47.048952   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network default is active
	I0722 00:50:47.049477   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Ensuring network mk-default-k8s-diff-port-214905 is active
	I0722 00:50:47.049877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Getting domain xml...
	I0722 00:50:47.050571   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Creating domain...
	I0722 00:50:48.347353   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting to get IP...
	I0722 00:50:48.348112   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348442   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.348510   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.348437   73117 retry.go:31] will retry after 231.852881ms: waiting for machine to come up
	I0722 00:50:48.581882   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582385   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.582420   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.582328   73117 retry.go:31] will retry after 274.458597ms: waiting for machine to come up
	I0722 00:50:48.858786   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859344   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:48.859376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:48.859303   73117 retry.go:31] will retry after 470.345038ms: waiting for machine to come up
	I0722 00:50:49.331004   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331545   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.331577   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.331475   73117 retry.go:31] will retry after 503.309601ms: waiting for machine to come up
	I0722 00:50:49.836108   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836714   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:49.836742   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:49.836621   73117 retry.go:31] will retry after 647.219852ms: waiting for machine to come up
	I0722 00:50:50.485174   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485816   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:50.485848   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:50.485763   73117 retry.go:31] will retry after 728.915406ms: waiting for machine to come up
	I0722 00:50:51.216722   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217043   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:51.217074   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:51.216992   73117 retry.go:31] will retry after 1.152926855s: waiting for machine to come up
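	[editor's note] Each DBG retry line above comes from one wait loop: probe for a DHCP lease, sleep a randomized and growing interval, try again — the pattern retry.go:31 logs. A hypothetical condensed form (waitForIP and the fake lookup are illustrative names):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow roughly like the logged intervals
		}
		return "", errors.New("machine never reported an IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.10", nil
		}, 10)
		fmt.Println(ip, err)
	}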
	I0722 00:50:48.461230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:48.960910   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.460899   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:49.960401   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.461045   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:50.960474   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:51.961268   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.460893   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:52.960284   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
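	[editor's note] This 500ms pgrep cadence is the apiserver-process wait: keep matching the full command line until a kube-apiserver mentioning "minikube" shows up. A minimal sketch, assuming a one-minute budget:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			// -x: exact match, -n: newest, -f: match the full command line
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver process never appeared")
	}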
	I0722 00:50:48.768413   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:50.769789   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.769882   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:52.297428   72069 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.359050025s)
	I0722 00:50:52.297450   72069 crio.go:469] duration metric: took 2.359170648s to extract the tarball
	I0722 00:50:52.297457   72069 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:50:52.338131   72069 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:50:52.385152   72069 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:50:52.385171   72069 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:50:52.385179   72069 kubeadm.go:934] updating node { 192.168.72.32 8443 v1.30.3 crio true true} ...
	I0722 00:50:52.385284   72069 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-360389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:50:52.385368   72069 ssh_runner.go:195] Run: crio config
	I0722 00:50:52.430760   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:52.430786   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:52.430798   72069 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:50:52.430816   72069 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-360389 NodeName:embed-certs-360389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:50:52.430935   72069 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-360389"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:50:52.430996   72069 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:50:52.440519   72069 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:50:52.440585   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:50:52.449409   72069 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 00:50:52.466546   72069 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:50:52.485895   72069 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0722 00:50:52.502760   72069 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I0722 00:50:52.506370   72069 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
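	[editor's note] The bash one-liner above makes the /etc/hosts entry idempotent: strip any stale control-plane.minikube.internal line, then append the current IP. A hypothetical Go rendering of the same rewrite:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Same effect as the logged `grep -v $'\t<name>$'`.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.72.32", "control-plane.minikube.internal"); err != nil {
			fmt.Println("update failed (needs root):", err)
		}
	}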
	I0722 00:50:52.517656   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:50:52.666627   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:50:52.683677   72069 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389 for IP: 192.168.72.32
	I0722 00:50:52.683705   72069 certs.go:194] generating shared ca certs ...
	I0722 00:50:52.683727   72069 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:50:52.683914   72069 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:50:52.683982   72069 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:50:52.683996   72069 certs.go:256] generating profile certs ...
	I0722 00:50:52.684118   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/client.key
	I0722 00:50:52.684214   72069 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key.67e111e7
	I0722 00:50:52.684280   72069 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key
	I0722 00:50:52.684447   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:50:52.684495   72069 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:50:52.684507   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:50:52.684541   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:50:52.684572   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:50:52.684603   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:50:52.684657   72069 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:50:52.685501   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:50:52.732873   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:50:52.765982   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:50:52.801537   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:50:52.839015   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0722 00:50:52.864056   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:50:52.889671   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:50:52.914643   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/embed-certs-360389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:50:52.938302   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:50:52.960789   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:50:52.990797   72069 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:50:53.013992   72069 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:50:53.032979   72069 ssh_runner.go:195] Run: openssl version
	I0722 00:50:53.040299   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:50:53.051624   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055835   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.055910   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:50:53.061573   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:50:53.072645   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:50:53.082920   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087177   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.087222   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:50:53.092824   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:50:53.103725   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:50:53.114567   72069 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118736   72069 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.118813   72069 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:50:53.124186   72069 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:50:53.134877   72069 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:50:53.139267   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:50:53.147216   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:50:53.155304   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:50:53.163301   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:50:53.169704   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:50:53.177562   72069 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
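	[editor's note] Each `-checkend 86400` call above asks whether a certificate expires within the next day. A native-Go equivalent using crypto/x509 (cert paths copied from the log; the program itself is a sketch):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			fmt.Println(p, "expires within 24h:", soon, "err:", err)
		}
	}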
	I0722 00:50:53.183189   72069 kubeadm.go:392] StartCluster: {Name:embed-certs-360389 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-360389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:50:53.183275   72069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:50:53.183336   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.217868   72069 cri.go:89] found id: ""
	I0722 00:50:53.217972   72069 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:50:53.227890   72069 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:50:53.227910   72069 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:50:53.227960   72069 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:50:53.237729   72069 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:50:53.239328   72069 kubeconfig.go:125] found "embed-certs-360389" server: "https://192.168.72.32:8443"
	I0722 00:50:53.242521   72069 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:50:53.251869   72069 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.32
	I0722 00:50:53.251905   72069 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:50:53.251915   72069 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:50:53.251967   72069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:50:53.293190   72069 cri.go:89] found id: ""
	I0722 00:50:53.293286   72069 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:50:53.311306   72069 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:50:53.321626   72069 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:50:53.321656   72069 kubeadm.go:157] found existing configuration files:
	
	I0722 00:50:53.321708   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:50:53.331267   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:50:53.331331   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:50:53.340503   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:50:53.348895   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:50:53.348962   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:50:53.359474   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.369258   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:50:53.369321   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:50:53.378465   72069 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:50:53.387122   72069 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:50:53.387180   72069 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:50:53.396233   72069 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:50:53.406018   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:53.535750   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.448623   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.665182   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:50:54.758554   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
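	[editor's note] Rather than a full `kubeadm init`, the restart path replays individual init phases against the existing config, in the exact order logged above. A condensed sketch of that sequence (it calls a plain `kubeadm` on PATH rather than the versioned binary dir the log uses):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
				return
			}
		}
		fmt.Println("control plane restarted from existing configuration")
	}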
	I0722 00:50:54.874087   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:50:54.874187   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.374526   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.874701   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.940361   72069 api_server.go:72] duration metric: took 1.066273178s to wait for apiserver process to appear ...
	I0722 00:50:55.940394   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:50:55.940417   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:55.941027   72069 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I0722 00:50:52.371679   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372124   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:52.372154   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:52.372074   73117 retry.go:31] will retry after 1.417897172s: waiting for machine to come up
	I0722 00:50:53.791313   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791783   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:53.791823   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:53.791737   73117 retry.go:31] will retry after 1.482508019s: waiting for machine to come up
	I0722 00:50:55.275630   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:55.276044   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:55.275985   73117 retry.go:31] will retry after 2.294358884s: waiting for machine to come up
	I0722 00:50:53.461303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:53.960356   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.461276   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:54.960708   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.460934   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.960980   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:56.961161   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.461070   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:57.960557   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:55.266725   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:57.266981   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:50:56.441470   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.644223   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.644279   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.644307   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.692976   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:50:58.693011   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:50:58.941437   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:58.996818   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:58.996860   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.441379   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.449521   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:50:59.449558   72069 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:50:59.941151   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:50:59.948899   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:50:59.957451   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:50:59.957482   72069 api_server.go:131] duration metric: took 4.017081577s to wait for apiserver health ...
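	[editor's note] The healthz ramp above — connection refused, then 403 for the anonymous probe, then 500 while the rbac and priority-class post-start hooks finish, then 200 "ok" — is driven by a simple poll loop. A rough sketch, assuming a self-signed apiserver cert so TLS verification is skipped:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitHealthz(url string, timeout time.Duration) error {
		// InsecureSkipVerify stands in for the real client's CA handling; an
		// anonymous probe is enough to observe the 403 -> 500 -> 200 ramp.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz never returned 200 within %v", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.72.32:8443/healthz", time.Minute))
	}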
	I0722 00:50:59.957490   72069 cni.go:84] Creating CNI manager for ""
	I0722 00:50:59.957496   72069 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:50:59.959463   72069 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:50:59.960972   72069 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:50:59.973358   72069 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
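	[editor's note] The 496-byte conflist itself is not shown in the log; the sketch below emits a plausible bridge CNI config of the same shape (all field values are assumptions, matching only the logged 10.244.0.0/16 pod CIDR):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		conflist := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":        "bridge",
					"bridge":      "bridge",
					"isGateway":   true,
					"ipMasq":      true,
					"hairpinMode": true, // matches the logged kubelet hairpin-veth mode
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // the logged pod CIDR
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		b, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(b))
	}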
	I0722 00:50:59.996477   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:00.011497   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:00.011530   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:00.011537   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:00.011543   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:00.011548   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:00.011555   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:51:00.011562   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:00.011569   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:00.011574   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:51:00.011588   72069 system_pods.go:74] duration metric: took 15.088386ms to wait for pod list to return data ...
	I0722 00:51:00.011600   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:00.014410   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:00.014434   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:00.014443   72069 node_conditions.go:105] duration metric: took 2.83771ms to run NodePressure ...
	I0722 00:51:00.014459   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:00.277522   72069 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281620   72069 kubeadm.go:739] kubelet initialised
	I0722 00:51:00.281644   72069 kubeadm.go:740] duration metric: took 4.098751ms waiting for restarted kubelet to initialise ...
	I0722 00:51:00.281652   72069 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:00.286332   72069 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.290670   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290691   72069 pod_ready.go:81] duration metric: took 4.337546ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.290699   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.290705   72069 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.294203   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294223   72069 pod_ready.go:81] duration metric: took 3.5095ms for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.294234   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "etcd-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.294240   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.297870   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297889   72069 pod_ready.go:81] duration metric: took 3.639162ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.297899   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.297907   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.399718   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399749   72069 pod_ready.go:81] duration metric: took 101.831539ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.399760   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.399772   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:00.800353   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800390   72069 pod_ready.go:81] duration metric: took 400.607179ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:00.800404   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-proxy-8j7bx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:00.800413   72069 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:01.199482   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199514   72069 pod_ready.go:81] duration metric: took 399.092927ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.199526   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.199534   72069 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:50:57.571594   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572139   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:57.572162   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:57.572109   73117 retry.go:31] will retry after 1.96079151s: waiting for machine to come up
	I0722 00:50:59.534290   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534749   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:50:59.534773   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:50:59.534683   73117 retry.go:31] will retry after 3.106225743s: waiting for machine to come up
	I0722 00:51:01.600138   72069 pod_ready.go:97] node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600162   72069 pod_ready.go:81] duration metric: took 400.618311ms for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:51:01.600171   72069 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-360389" hosting pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:01.600177   72069 pod_ready.go:38] duration metric: took 1.318514842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
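Note: every per-pod wait above short-circuits because the node, not the pods, is the blocker: with "embed-certs-360389" reporting Ready:"False", pod_ready.go marks each pod as skipped and moves on, which is why the whole extra-wait pass finishes in ~1.3s. A rough manual equivalent of what the loop polls (hypothetical kubectl commands; the harness uses client-go directly):

	kubectl --context embed-certs-360389 get node embed-certs-360389 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl --context embed-certs-360389 -n kube-system wait pod \
	  -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' \
	  --for=condition=Ready --timeout=4m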
	I0722 00:51:01.600194   72069 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:51:01.611349   72069 ops.go:34] apiserver oom_adj: -16
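Note: an oom_adj of -16 means kube-apiserver is strongly deprioritized for OOM kills; the harness reads it straight from procfs. Same check by hand on the guest (hypothetical session, mirroring the logged command):

	cat /proc/$(pgrep kube-apiserver)/oom_adj   # expect a negative value, here -16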
	I0722 00:51:01.611372   72069 kubeadm.go:597] duration metric: took 8.383454887s to restartPrimaryControlPlane
	I0722 00:51:01.611379   72069 kubeadm.go:394] duration metric: took 8.42819594s to StartCluster
	I0722 00:51:01.611396   72069 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.611480   72069 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:51:01.613127   72069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:01.613406   72069 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:51:01.613519   72069 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:51:01.613588   72069 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-360389"
	I0722 00:51:01.613592   72069 config.go:182] Loaded profile config "embed-certs-360389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:01.613610   72069 addons.go:69] Setting default-storageclass=true in profile "embed-certs-360389"
	I0722 00:51:01.613629   72069 addons.go:69] Setting metrics-server=true in profile "embed-certs-360389"
	I0722 00:51:01.613634   72069 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-360389"
	W0722 00:51:01.613642   72069 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:51:01.613652   72069 addons.go:234] Setting addon metrics-server=true in "embed-certs-360389"
	W0722 00:51:01.613658   72069 addons.go:243] addon metrics-server should already be in state true
	I0722 00:51:01.613674   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613680   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.613642   72069 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-360389"
	I0722 00:51:01.614224   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614252   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614280   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614331   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.614730   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.614807   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.616230   72069 out.go:177] * Verifying Kubernetes components...
	I0722 00:51:01.617895   72069 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:01.631426   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0722 00:51:01.631925   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.632483   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.632519   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.632909   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.633499   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.633546   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.634409   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0722 00:51:01.634453   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0722 00:51:01.634915   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.634921   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.635379   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635393   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.635396   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635410   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.635742   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635783   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.635921   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.636364   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.636397   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.639407   72069 addons.go:234] Setting addon default-storageclass=true in "embed-certs-360389"
	W0722 00:51:01.639433   72069 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:51:01.639463   72069 host.go:66] Checking if "embed-certs-360389" exists ...
	I0722 00:51:01.639862   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.639902   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.649428   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0722 00:51:01.649959   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.650438   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.650454   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.650876   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.651094   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.651395   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0722 00:51:01.651796   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.652255   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.652285   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.652634   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.652785   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.652809   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654284   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.654712   72069 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:51:01.655877   72069 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:51:01.656785   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:51:01.656804   72069 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:51:01.656821   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.657584   72069 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.657601   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:51:01.657619   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.659326   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0722 00:51:01.659901   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.660150   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660614   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.660637   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.660732   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660759   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.660926   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.660951   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.660964   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.660977   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661039   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.661057   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.661235   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.661406   72069 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:51:01.661411   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.661419   72069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:51:01.661556   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661721   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.661723   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.661835   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.676175   72069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0722 00:51:01.676615   72069 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:51:01.677082   72069 main.go:141] libmachine: Using API Version  1
	I0722 00:51:01.677109   72069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:51:01.677452   72069 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:51:01.677647   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetState
	I0722 00:51:01.679166   72069 main.go:141] libmachine: (embed-certs-360389) Calling .DriverName
	I0722 00:51:01.679360   72069 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.679373   72069 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:51:01.679385   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHHostname
	I0722 00:51:01.681804   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682121   72069 main.go:141] libmachine: (embed-certs-360389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:4e:22", ip: ""} in network mk-embed-certs-360389: {Iface:virbr4 ExpiryTime:2024-07-22 01:50:37 +0000 UTC Type:0 Mac:52:54:00:bc:4e:22 Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:embed-certs-360389 Clientid:01:52:54:00:bc:4e:22}
	I0722 00:51:01.682156   72069 main.go:141] libmachine: (embed-certs-360389) DBG | domain embed-certs-360389 has defined IP address 192.168.72.32 and MAC address 52:54:00:bc:4e:22 in network mk-embed-certs-360389
	I0722 00:51:01.682289   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHPort
	I0722 00:51:01.682445   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHKeyPath
	I0722 00:51:01.682593   72069 main.go:141] libmachine: (embed-certs-360389) Calling .GetSSHUsername
	I0722 00:51:01.682725   72069 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/embed-certs-360389/id_rsa Username:docker}
	I0722 00:51:01.803002   72069 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:01.819424   72069 node_ready.go:35] waiting up to 6m0s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:01.882197   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:51:01.889557   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:51:01.889578   72069 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:51:01.896485   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:51:01.928750   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:51:01.928784   72069 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:51:01.968904   72069 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:01.968937   72069 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:51:01.992585   72069 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:51:02.835971   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.835999   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836000   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836013   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836280   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836281   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836298   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836297   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836307   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836302   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836316   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836333   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.836346   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836369   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.836562   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836579   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.836722   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.836737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.836755   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.842016   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.842030   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.842229   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.842248   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845216   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845229   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845505   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845522   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845522   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.845532   72069 main.go:141] libmachine: Making call to close driver server
	I0722 00:51:02.845540   72069 main.go:141] libmachine: (embed-certs-360389) Calling .Close
	I0722 00:51:02.845737   72069 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:51:02.845748   72069 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:51:02.845757   72069 addons.go:475] Verifying addon metrics-server=true in "embed-certs-360389"
	I0722 00:51:02.845763   72069 main.go:141] libmachine: (embed-certs-360389) DBG | Closing plugin on server side
	I0722 00:51:02.847683   72069 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
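Note: the addon flow above is: scp each manifest into /etc/kubernetes/addons on the guest, then apply it with the guest's pinned kubectl. The consolidated metrics-server apply, taken verbatim from the log and wrapped here for readability:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.30.3/kubectl apply \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	  -f /etc/kubernetes/addons/metrics-server-service.yaml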
	I0722 00:50:58.460682   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:58.961066   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.960543   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.460539   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:00.960410   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.460841   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:01.960247   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.461159   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:02.960892   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:50:59.267841   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:01.268220   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:02.848943   72069 addons.go:510] duration metric: took 1.235424601s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0722 00:51:03.824209   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:06.323498   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:02.642573   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.642983   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | unable to find current IP address of domain default-k8s-diff-port-214905 in network mk-default-k8s-diff-port-214905
	I0722 00:51:02.643011   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | I0722 00:51:02.642955   73117 retry.go:31] will retry after 3.615938149s: waiting for machine to come up
	I0722 00:51:06.261423   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262022   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Found IP for machine: 192.168.61.97
	I0722 00:51:06.262058   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has current primary IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.262076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserving static IP address...
	I0722 00:51:06.262581   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.262624   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | skip adding static IP to network mk-default-k8s-diff-port-214905 - found existing host DHCP lease matching {name: "default-k8s-diff-port-214905", mac: "52:54:00:8d:14:d0", ip: "192.168.61.97"}
	I0722 00:51:06.262645   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Reserved static IP address: 192.168.61.97
	I0722 00:51:06.262660   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Getting to WaitForSSH function...
	I0722 00:51:06.262673   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Waiting for SSH to be available...
	I0722 00:51:06.265582   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.265939   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.265966   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.266145   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH client type: external
	I0722 00:51:06.266169   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa (-rw-------)
	I0722 00:51:06.266206   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0722 00:51:06.266234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | About to run SSH command:
	I0722 00:51:06.266252   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | exit 0
	I0722 00:51:06.390383   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | SSH cmd err, output: <nil>: 
	I0722 00:51:06.390769   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetConfigRaw
	I0722 00:51:06.391433   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.393871   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394198   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.394230   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.394497   71227 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/config.json ...
	I0722 00:51:06.394707   71227 machine.go:94] provisionDockerMachine start ...
	I0722 00:51:06.394726   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:06.394909   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.397075   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397398   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.397427   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.397586   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.397771   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.397908   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.398076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.398248   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.398459   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.398470   71227 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:51:06.506700   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:51:06.506731   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.506963   71227 buildroot.go:166] provisioning hostname "default-k8s-diff-port-214905"
	I0722 00:51:06.506986   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.507183   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.509855   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.510256   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.510376   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.510576   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510799   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.510958   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.511134   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.511310   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.511323   71227 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-214905 && echo "default-k8s-diff-port-214905" | sudo tee /etc/hostname
	I0722 00:51:03.460261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.961120   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.461171   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:04.961255   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.461282   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:05.960635   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.460360   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:06.960377   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.460438   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:07.960499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:03.768274   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.268010   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:06.628589   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-214905
	
	I0722 00:51:06.628640   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.631366   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.631809   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.631839   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.632098   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.632294   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632471   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.632633   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.632834   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:06.632999   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:06.633016   71227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-214905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-214905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-214905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:51:06.747587   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:51:06.747617   71227 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-5094/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-5094/.minikube}
	I0722 00:51:06.747634   71227 buildroot.go:174] setting up certificates
	I0722 00:51:06.747660   71227 provision.go:84] configureAuth start
	I0722 00:51:06.747668   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetMachineName
	I0722 00:51:06.747962   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:06.750710   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751142   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.751178   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.751395   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.754054   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754396   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.754426   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.754709   71227 provision.go:143] copyHostCerts
	I0722 00:51:06.754776   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem, removing ...
	I0722 00:51:06.754788   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem
	I0722 00:51:06.754847   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/ca.pem (1082 bytes)
	I0722 00:51:06.754946   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem, removing ...
	I0722 00:51:06.754954   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem
	I0722 00:51:06.754975   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/cert.pem (1123 bytes)
	I0722 00:51:06.755037   71227 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem, removing ...
	I0722 00:51:06.755043   71227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem
	I0722 00:51:06.755060   71227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-5094/.minikube/key.pem (1679 bytes)
	I0722 00:51:06.755122   71227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-214905 san=[127.0.0.1 192.168.61.97 default-k8s-diff-port-214905 localhost minikube]
	I0722 00:51:06.848932   71227 provision.go:177] copyRemoteCerts
	I0722 00:51:06.848987   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:51:06.849007   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:06.851953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852361   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:06.852392   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:06.852559   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:06.852750   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:06.852931   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:06.853090   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:06.939951   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:51:06.967820   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0722 00:51:06.996502   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0722 00:51:07.025122   71227 provision.go:87] duration metric: took 277.451ms to configureAuth
	I0722 00:51:07.025148   71227 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:51:07.025334   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:51:07.025435   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.029027   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029371   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.029405   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.029656   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.029887   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030059   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.030218   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.030455   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.030683   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.030715   71227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0722 00:51:07.298997   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0722 00:51:07.299023   71227 machine.go:97] duration metric: took 904.303148ms to provisionDockerMachine
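Note: the "%!s(MISSING)" tokens in this log (here and in the date and find commands below) are Go fmt placeholders for format arguments that were not supplied when the command template was logged; they are not part of the executed command. Reconstructed from the template and the echoed file contents (a hedged reading, not verbatim from the harness):

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio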
	I0722 00:51:07.299034   71227 start.go:293] postStartSetup for "default-k8s-diff-port-214905" (driver="kvm2")
	I0722 00:51:07.299043   71227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:51:07.299062   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.299370   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:51:07.299400   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.302453   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.302850   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.302877   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.303025   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.303210   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.303486   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.303645   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.384902   71227 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:51:07.388858   71227 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:51:07.388879   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/addons for local assets ...
	I0722 00:51:07.388951   71227 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-5094/.minikube/files for local assets ...
	I0722 00:51:07.389043   71227 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem -> 122632.pem in /etc/ssl/certs
	I0722 00:51:07.389153   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:51:07.398326   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:07.423998   71227 start.go:296] duration metric: took 124.953045ms for postStartSetup
	I0722 00:51:07.424038   71227 fix.go:56] duration metric: took 20.400846293s for fixHost
	I0722 00:51:07.424056   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.426626   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.426970   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.426997   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.427120   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.427314   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427454   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.427554   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.427702   71227 main.go:141] libmachine: Using SSH client type: native
	I0722 00:51:07.427866   71227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0722 00:51:07.427875   71227 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:51:07.535404   71227 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721609467.506036600
	
	I0722 00:51:07.535428   71227 fix.go:216] guest clock: 1721609467.506036600
	I0722 00:51:07.535438   71227 fix.go:229] Guest: 2024-07-22 00:51:07.5060366 +0000 UTC Remote: 2024-07-22 00:51:07.424041395 +0000 UTC m=+355.867052958 (delta=81.995205ms)
	I0722 00:51:07.535465   71227 fix.go:200] guest clock delta is within tolerance: 81.995205ms
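Note: the clock check runs date on the guest (its template is logged above with fmt placeholders), parses the seconds.nanoseconds output, and compares it with the host clock; a delta of ~82ms is inside fix.go's tolerance, so no clock adjustment is made. The reconstructed command:

	date +%s.%N    # e.g. 1721609467.506036600, as echoed above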
	I0722 00:51:07.535472   71227 start.go:83] releasing machines lock for "default-k8s-diff-port-214905", held for 20.512313153s
	I0722 00:51:07.535489   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.535744   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:07.538163   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.538490   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.538658   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539103   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539307   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:51:07.539409   71227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 00:51:07.539460   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.539491   71227 ssh_runner.go:195] Run: cat /version.json
	I0722 00:51:07.539512   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:51:07.542221   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542254   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542584   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542631   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542661   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:07.542683   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:07.542776   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542913   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:51:07.542961   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543086   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:51:07.543227   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543234   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:51:07.543398   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.543418   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:51:07.619357   71227 ssh_runner.go:195] Run: systemctl --version
	I0722 00:51:07.656949   71227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0722 00:51:07.798616   71227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:51:07.804187   71227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:51:07.804248   71227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:51:07.819247   71227 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
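
The two commands above are minikube's CNI cleanup for the crio runtime: any bridge or podman CNI config that is not already disabled gets renamed with a .mk_disabled suffix so cri-o ignores it, without deleting the file. A minimal standalone sketch of the same pattern (paths as in the log; run as root):

    # Rename every bridge/podman CNI config so the runtime skips it;
    # files already carrying the *.mk_disabled suffix are left untouched.
    find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
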
	I0722 00:51:07.819270   71227 start.go:495] detecting cgroup driver to use...
	I0722 00:51:07.819332   71227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:51:07.837221   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:51:07.851412   71227 docker.go:217] disabling cri-docker service (if available) ...
	I0722 00:51:07.851505   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0722 00:51:07.865291   71227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0722 00:51:07.879430   71227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0722 00:51:07.997765   71227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0722 00:51:08.135988   71227 docker.go:233] disabling docker service ...
	I0722 00:51:08.136067   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0722 00:51:08.150346   71227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0722 00:51:08.163889   71227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0722 00:51:08.298086   71227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0722 00:51:08.419369   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0722 00:51:08.432606   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:51:08.449828   71227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0722 00:51:08.449907   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.459533   71227 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0722 00:51:08.459611   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.470121   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.480501   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.490487   71227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:51:08.500851   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.511182   71227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0722 00:51:08.529185   71227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
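
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch of the end state, with section placement as in a stock crio.conf, not a verbatim dump of minikube's drop-in):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
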
	I0722 00:51:08.539257   71227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:51:08.548621   71227 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0722 00:51:08.548682   71227 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0722 00:51:08.561344   71227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
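
The sysctl probe above fails because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/ does not exist; minikube then falls back to loading the module and enabling forwarding directly, as the two commands show. An equivalent manual sequence with a verification step added:

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # Verify the bridge keys now exist (they only appear after br_netfilter loads).
    sysctl net.bridge.bridge-nf-call-iptables
    sysctl net.ipv4.ip_forward
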
	I0722 00:51:08.571236   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:08.678632   71227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0722 00:51:08.828128   71227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0722 00:51:08.828202   71227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0722 00:51:08.832759   71227 start.go:563] Will wait 60s for crictl version
	I0722 00:51:08.832815   71227 ssh_runner.go:195] Run: which crictl
	I0722 00:51:08.836611   71227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:51:08.879895   71227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0722 00:51:08.879978   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.913531   71227 ssh_runner.go:195] Run: crio --version
	I0722 00:51:08.943249   71227 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0722 00:51:08.823503   72069 node_ready.go:53] node "embed-certs-360389" has status "Ready":"False"
	I0722 00:51:09.328534   72069 node_ready.go:49] node "embed-certs-360389" has status "Ready":"True"
	I0722 00:51:09.328575   72069 node_ready.go:38] duration metric: took 7.509115209s for node "embed-certs-360389" to be "Ready" ...
	I0722 00:51:09.328587   72069 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:09.340718   72069 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349817   72069 pod_ready.go:92] pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:09.349844   72069 pod_ready.go:81] duration metric: took 9.091894ms for pod "coredns-7db6d8ff4d-7mzsv" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:09.349857   72069 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:11.356268   72069 pod_ready.go:102] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:08.944467   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetIP
	I0722 00:51:08.947436   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.947806   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:51:08.947838   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:51:08.948037   71227 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0722 00:51:08.952129   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
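
The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the current one, and copy the result back with sudo (the temp-file dance is needed because a plain shell redirection would run without root). The same pattern, spelled out:

    IP=192.168.61.1   # gateway IP from the log; adjust per network
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$IP"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
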
	I0722 00:51:08.966560   71227 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:51:08.966753   71227 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0722 00:51:08.966821   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:09.005650   71227 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0722 00:51:09.005706   71227 ssh_runner.go:195] Run: which lz4
	I0722 00:51:09.009590   71227 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:51:09.014529   71227 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:51:09.014556   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0722 00:51:10.301898   71227 crio.go:462] duration metric: took 1.292341881s to copy over tarball
	I0722 00:51:10.301974   71227 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
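
The preload flow above is: stat /preloaded.tar.lz4 on the guest (it fails, so the tarball is absent), scp the ~406 MB cached image tarball across, then unpack it into /var with extended attributes preserved so file capabilities on the images survive. The extraction step, runnable standalone on the guest:

    # Unpack the preloaded container images into /var (cri-o's image store
    # lives under /var/lib/containers), keeping security xattrs intact.
    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
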
	I0722 00:51:08.460296   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.960703   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.460345   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:09.961107   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.460717   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:10.960649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.460994   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:11.960400   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.460826   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:12.960914   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:08.268664   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:10.768410   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:13.356194   72069 pod_ready.go:92] pod "etcd-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.356217   72069 pod_ready.go:81] duration metric: took 4.006352581s for pod "etcd-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.356229   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360601   72069 pod_ready.go:92] pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.360626   72069 pod_ready.go:81] duration metric: took 4.389152ms for pod "kube-apiserver-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.360635   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.364988   72069 pod_ready.go:92] pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.365009   72069 pod_ready.go:81] duration metric: took 4.367584ms for pod "kube-controller-manager-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.365018   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369552   72069 pod_ready.go:92] pod "kube-proxy-8j7bx" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.369590   72069 pod_ready.go:81] duration metric: took 4.555193ms for pod "kube-proxy-8j7bx" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.369598   72069 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373933   72069 pod_ready.go:92] pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:13.373956   72069 pod_ready.go:81] duration metric: took 4.351858ms for pod "kube-scheduler-embed-certs-360389" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:13.373968   72069 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:15.645600   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:12.606722   71227 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304710499s)
	I0722 00:51:12.606759   71227 crio.go:469] duration metric: took 2.304831492s to extract the tarball
	I0722 00:51:12.606769   71227 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:51:12.645926   71227 ssh_runner.go:195] Run: sudo crictl images --output json
	I0722 00:51:12.690525   71227 crio.go:514] all images are preloaded for cri-o runtime.
	I0722 00:51:12.690572   71227 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:51:12.690593   71227 kubeadm.go:934] updating node { 192.168.61.97 8444 v1.30.3 crio true true} ...
	I0722 00:51:12.690794   71227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-214905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:51:12.690871   71227 ssh_runner.go:195] Run: crio config
	I0722 00:51:12.740592   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:12.740615   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:12.740623   71227 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:51:12.740642   71227 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-214905 NodeName:default-k8s-diff-port-214905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:51:12.740775   71227 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-214905"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:51:12.740829   71227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:51:12.750624   71227 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:51:12.750699   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 00:51:12.760315   71227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0722 00:51:12.776686   71227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:51:12.793077   71227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0722 00:51:12.809852   71227 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0722 00:51:12.813854   71227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:51:12.826255   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:51:12.936768   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:51:12.951993   71227 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905 for IP: 192.168.61.97
	I0722 00:51:12.952018   71227 certs.go:194] generating shared ca certs ...
	I0722 00:51:12.952041   71227 certs.go:226] acquiring lock for ca certs: {Name:mk670e7dec7f1b116dfecf047bc459d9ed15ed73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:51:12.952217   71227 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key
	I0722 00:51:12.952303   71227 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key
	I0722 00:51:12.952318   71227 certs.go:256] generating profile certs ...
	I0722 00:51:12.952424   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/client.key
	I0722 00:51:12.952492   71227 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key.1c3eb547
	I0722 00:51:12.952528   71227 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key
	I0722 00:51:12.952667   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem (1338 bytes)
	W0722 00:51:12.952717   71227 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263_empty.pem, impossibly tiny 0 bytes
	I0722 00:51:12.952730   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 00:51:12.952759   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/ca.pem (1082 bytes)
	I0722 00:51:12.952780   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/cert.pem (1123 bytes)
	I0722 00:51:12.952809   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/certs/key.pem (1679 bytes)
	I0722 00:51:12.952859   71227 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem (1708 bytes)
	I0722 00:51:12.953537   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:51:12.993389   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:51:13.025618   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:51:13.053137   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 00:51:13.078098   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0722 00:51:13.118233   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:51:13.149190   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:51:13.172594   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/default-k8s-diff-port-214905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:51:13.195689   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/ssl/certs/122632.pem --> /usr/share/ca-certificates/122632.pem (1708 bytes)
	I0722 00:51:13.217891   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:51:13.240012   71227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-5094/.minikube/certs/12263.pem --> /usr/share/ca-certificates/12263.pem (1338 bytes)
	I0722 00:51:13.261671   71227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:51:13.278737   71227 ssh_runner.go:195] Run: openssl version
	I0722 00:51:13.284102   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:51:13.294324   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298340   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.298410   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:51:13.303783   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:51:13.314594   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12263.pem && ln -fs /usr/share/ca-certificates/12263.pem /etc/ssl/certs/12263.pem"
	I0722 00:51:13.326814   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331323   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:37 /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.331392   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12263.pem
	I0722 00:51:13.337168   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12263.pem /etc/ssl/certs/51391683.0"
	I0722 00:51:13.348896   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122632.pem && ln -fs /usr/share/ca-certificates/122632.pem /etc/ssl/certs/122632.pem"
	I0722 00:51:13.361441   71227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367064   71227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:37 /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.367126   71227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122632.pem
	I0722 00:51:13.372922   71227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/122632.pem /etc/ssl/certs/3ec20f2e.0"
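
The openssl/ln pairs above follow OpenSSL's CA directory convention: each trusted cert must be reachable in /etc/ssl/certs via a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash. A sketch of the pattern for one cert (filename and hash as seen in the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
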
	I0722 00:51:13.383463   71227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:51:13.387997   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 00:51:13.393574   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 00:51:13.399343   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 00:51:13.405063   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 00:51:13.410536   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 00:51:13.415992   71227 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
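
Each -checkend 86400 call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration. For one of the certs from the log:

    if ! openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "apiserver-kubelet-client.crt expires within 24h; regenerate"
    fi
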
	I0722 00:51:13.421792   71227 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-214905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-214905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:51:13.421865   71227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0722 00:51:13.421944   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.457829   71227 cri.go:89] found id: ""
	I0722 00:51:13.457900   71227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:51:13.468393   71227 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 00:51:13.468417   71227 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 00:51:13.468474   71227 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 00:51:13.478824   71227 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:51:13.480024   71227 kubeconfig.go:125] found "default-k8s-diff-port-214905" server: "https://192.168.61.97:8444"
	I0722 00:51:13.482294   71227 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 00:51:13.491655   71227 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0722 00:51:13.491688   71227 kubeadm.go:1160] stopping kube-system containers ...
	I0722 00:51:13.491702   71227 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0722 00:51:13.491744   71227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0722 00:51:13.530988   71227 cri.go:89] found id: ""
	I0722 00:51:13.531061   71227 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 00:51:13.547834   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:51:13.557388   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:51:13.557408   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:51:13.557459   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:51:13.565947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:51:13.566004   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:51:13.575773   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:51:13.584661   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:51:13.584725   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:51:13.593454   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.601675   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:51:13.601720   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:51:13.610111   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:51:13.618310   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:51:13.618378   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:51:13.626981   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:51:13.635633   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:13.734700   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.654298   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.847590   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:14.917375   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
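
Rather than a full kubeadm init, the restart path replays individual init phases against the existing config, in the order logged above: certs, kubeconfig, kubelet-start, control-plane, etcd. Condensed into a loop (paths as in the log):

    K=/var/lib/minikube/binaries/v1.30.3
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into two args.
      sudo env PATH="$K:$PATH" kubeadm init phase $phase --config "$CFG"
    done
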
	I0722 00:51:15.033414   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:51:15.033507   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.534351   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.034349   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.534006   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.460935   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.960254   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.461295   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:14.961095   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:15.961261   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.460761   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:16.961046   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.461110   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.960374   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:13.267650   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:15.519718   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.767440   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.880346   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:20.379826   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:17.034032   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.533910   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:17.549689   71227 api_server.go:72] duration metric: took 2.516274534s to wait for apiserver process to appear ...
	I0722 00:51:17.549723   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:51:17.549751   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.315281   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.315307   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.315319   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.344103   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 00:51:20.344130   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 00:51:20.550597   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:20.555109   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:20.555136   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.050717   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.054938   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.054972   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:21.550554   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:21.557083   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 00:51:21.557107   71227 api_server.go:103] status: https://192.168.61.97:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 00:51:22.049799   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:51:22.054794   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:51:22.062149   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:51:22.062174   71227 api_server.go:131] duration metric: took 4.512443714s to wait for apiserver health ...
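
The healthz progression above is typical of an apiserver restart: 403 while anonymous access to /healthz is still forbidden (RBAC bootstrap roles not yet created), then 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, then 200. A minimal poll loop equivalent to what the driver does (the curl flags are an assumption; the driver itself uses a Go HTTP client):

    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.61.97:8444/healthz)" = 200 ]; do
      sleep 0.5
    done
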
	I0722 00:51:22.062185   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:51:22.062193   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:51:22.064007   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:51:18.460962   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:18.960851   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.460803   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:19.960496   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.460310   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.960330   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.460661   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:21.960882   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.460368   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:22.960371   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:20.266940   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.270501   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.380407   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:24.882109   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:22.065398   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:51:22.104936   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
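
The 496-byte /etc/cni/net.d/1-k8s.conflist written above wires up the bridge CNI recommended earlier. Its exact contents are not in the log; a representative bridge conflist for the 10.244.0.0/16 pod CIDR from the kubeadm config might look like this (illustrative only, not a dump of minikube's file):

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
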
	I0722 00:51:22.128599   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:51:22.144519   71227 system_pods.go:59] 8 kube-system pods found
	I0722 00:51:22.144564   71227 system_pods.go:61] "coredns-7db6d8ff4d-tr5z2" [99882921-755a-43ff-85d5-2611575a0d4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:51:22.144590   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [5dbe4051-cba2-4a87-bfce-374e73365459] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 00:51:22.144602   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [5b2a4be9-37e0-44f3-bb3a-0d6183aa03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 00:51:22.144629   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [a7ab910f-e924-42fe-8f94-72a7e4c76fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 00:51:22.144643   71227 system_pods.go:61] "kube-proxy-4mnlj" [66f982d3-2434-4a4c-b8a1-b914fcd96183] Running
	I0722 00:51:22.144653   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [9912ec07-7cc5-4357-9def-00138d7996e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 00:51:22.144662   71227 system_pods.go:61] "metrics-server-569cc877fc-dm7k7" [05792ec6-8c4f-41db-9d49-78cebc0a5056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:51:22.144674   71227 system_pods.go:61] "storage-provisioner" [a4dafb4f-67d0-4168-9a54-6039d6629a67] Running
	I0722 00:51:22.144684   71227 system_pods.go:74] duration metric: took 16.064556ms to wait for pod list to return data ...
	I0722 00:51:22.144694   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:51:22.148289   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:51:22.148315   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:51:22.148326   71227 node_conditions.go:105] duration metric: took 3.621544ms to run NodePressure ...
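
The NodePressure verification boils down to reading each node's reported capacity, which is where the "ephemeral capacity is 17734596Ki" and "cpu capacity is 2" lines come from. A rough client-go equivalent (the kubeconfig path and printed fields are assumptions, not minikube's actual node_conditions code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a ResourceList; these helpers return quantities
            // like the 17734596Ki / 2 values seen in the log.
            fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
                n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }
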
	I0722 00:51:22.148341   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 00:51:22.413008   71227 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420071   71227 kubeadm.go:739] kubelet initialised
	I0722 00:51:22.420101   71227 kubeadm.go:740] duration metric: took 7.0676ms waiting for restarted kubelet to initialise ...
	I0722 00:51:22.420112   71227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:51:22.427282   71227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:24.433443   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:26.434366   71227 pod_ready.go:102] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"False"
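
Each pod_ready line above is one iteration of a poll on the pod's Ready condition: `"Ready":"False"` repeats until the condition flips to True or the 4m0s budget runs out. A self-contained client-go sketch of the same check, assuming the default kubeconfig and hard-coding the coredns pod name from the log (not minikube's actual pod_ready implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen in the log
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-7db6d8ff4d-tr5z2", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
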
	I0722 00:51:23.461091   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:23.960522   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.461076   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.961287   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.460347   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:25.961093   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.460471   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:26.960627   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.460795   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:27.961158   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:24.767672   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.380050   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:29.380929   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:27.432965   71227 pod_ready.go:92] pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:27.432986   71227 pod_ready.go:81] duration metric: took 5.00567238s for pod "coredns-7db6d8ff4d-tr5z2" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:27.433006   71227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:29.440533   71227 pod_ready.go:102] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:30.438931   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:30.438953   71227 pod_ready.go:81] duration metric: took 3.005939036s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:30.438962   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:28.460674   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:28.960359   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.461175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.960355   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.461217   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:30.961166   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.460949   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:31.960689   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.460297   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:32.961236   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:29.768011   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.267005   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:31.880242   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:34.380628   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.380937   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:32.445699   71227 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.946588   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.946631   71227 pod_ready.go:81] duration metric: took 3.507660629s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.946652   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951860   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.951880   71227 pod_ready.go:81] duration metric: took 5.22074ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.951889   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956269   71227 pod_ready.go:92] pod "kube-proxy-4mnlj" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:33.956288   71227 pod_ready.go:81] duration metric: took 4.393239ms for pod "kube-proxy-4mnlj" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:33.956298   71227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462509   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:51:34.462533   71227 pod_ready.go:81] duration metric: took 506.228194ms for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:34.462543   71227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	I0722 00:51:36.468873   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:33.461324   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:33.960311   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.461151   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:34.960568   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.460309   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:35.961227   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:35.961294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:35.999379   71766 cri.go:89] found id: ""
	I0722 00:51:35.999411   71766 logs.go:276] 0 containers: []
	W0722 00:51:35.999419   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:35.999426   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:35.999475   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:36.031077   71766 cri.go:89] found id: ""
	I0722 00:51:36.031110   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.031121   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:36.031128   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:36.031190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:36.064269   71766 cri.go:89] found id: ""
	I0722 00:51:36.064298   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.064306   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:36.064311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:36.064377   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:36.100853   71766 cri.go:89] found id: ""
	I0722 00:51:36.100886   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.100894   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:36.100899   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:36.100954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:36.138653   71766 cri.go:89] found id: ""
	I0722 00:51:36.138683   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.138693   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:36.138699   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:36.138780   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:36.175032   71766 cri.go:89] found id: ""
	I0722 00:51:36.175059   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.175069   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:36.175076   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:36.175132   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:36.212622   71766 cri.go:89] found id: ""
	I0722 00:51:36.212658   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.212670   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:36.212678   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:36.212731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:36.256399   71766 cri.go:89] found id: ""
	I0722 00:51:36.256422   71766 logs.go:276] 0 containers: []
	W0722 00:51:36.256429   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
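
Every cri.go/logs.go pair above is one `crictl ps -a --quiet --name=<component>` probe; `--quiet` prints matching container IDs one per line, so an empty result is what the log records as `found id: ""` followed by "0 containers". A sketch of the same sweep, with plain exec standing in for the remote runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers (running or not)
    // whose name matches the given filter, as crictl reports them.
    func listContainers(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(strings.TrimSpace(string(out)))
    }

    func main() {
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        } {
            ids := listContainers(name)
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
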
	I0722 00:51:36.256437   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:36.256448   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:36.310091   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:36.310123   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:36.326208   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:36.326250   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:36.453140   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:36.453166   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:36.453183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:36.516035   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:36.516069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
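
With no control-plane containers found, minikube falls back to gathering diagnostics, and the `describe nodes` step fails with "connection to the server localhost:8443 was refused" for the same reason the probes came back empty: no apiserver is running yet. The sketch below replays the gathering commands verbatim from the log, using local exec instead of the VM's SSH session:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Each entry mirrors one "Gathering logs for ..." command from the log.
    var gather = []struct{ name, cmd string }{
        {"kubelet", "sudo journalctl -u kubelet -n 400"},
        {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        {"CRI-O", "sudo journalctl -u crio -n 400"},
        {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    func main() {
        for _, g := range gather {
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", g.name, err)
                continue
            }
            fmt.Printf("== %s ==\n%s", g.name, out)
        }
    }

Note the `which crictl || echo crictl` fallback in the last command: if crictl is not on PATH the command still names it (so the error is legible), and if crictl fails entirely the `|| sudo docker ps -a` branch tries the Docker runtime instead.
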
	I0722 00:51:34.267563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:36.267895   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.381166   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.880622   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:38.968268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.968730   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:39.053668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:39.066584   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:39.066662   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:39.102829   71766 cri.go:89] found id: ""
	I0722 00:51:39.102856   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.102864   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:39.102869   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:39.102936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:39.135461   71766 cri.go:89] found id: ""
	I0722 00:51:39.135492   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.135500   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:39.135506   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:39.135563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:39.170506   71766 cri.go:89] found id: ""
	I0722 00:51:39.170531   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.170538   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:39.170543   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:39.170621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:39.208238   71766 cri.go:89] found id: ""
	I0722 00:51:39.208271   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.208279   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:39.208284   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:39.208334   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:39.241323   71766 cri.go:89] found id: ""
	I0722 00:51:39.241352   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.241362   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:39.241368   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:39.241431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:39.276693   71766 cri.go:89] found id: ""
	I0722 00:51:39.276719   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.276729   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:39.276735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:39.276782   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:39.328340   71766 cri.go:89] found id: ""
	I0722 00:51:39.328367   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.328375   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:39.328380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:39.328437   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:39.361403   71766 cri.go:89] found id: ""
	I0722 00:51:39.361430   71766 logs.go:276] 0 containers: []
	W0722 00:51:39.361440   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:39.361451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:39.361465   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:39.411739   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:39.411773   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:39.424447   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:39.424479   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:39.496323   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:39.496343   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:39.496363   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:39.565321   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:39.565358   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:42.104230   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:42.116488   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:42.116555   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:42.149582   71766 cri.go:89] found id: ""
	I0722 00:51:42.149612   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.149620   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:42.149625   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:42.149683   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:42.186140   71766 cri.go:89] found id: ""
	I0722 00:51:42.186168   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.186180   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:42.186187   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:42.186242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:42.217238   71766 cri.go:89] found id: ""
	I0722 00:51:42.217269   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.217281   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:42.217290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:42.217363   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:42.251090   71766 cri.go:89] found id: ""
	I0722 00:51:42.251118   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.251128   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:42.251135   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:42.251192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:42.287241   71766 cri.go:89] found id: ""
	I0722 00:51:42.287268   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.287275   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:42.287281   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:42.287346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:42.319322   71766 cri.go:89] found id: ""
	I0722 00:51:42.319348   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.319358   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:42.319364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:42.319439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:42.352085   71766 cri.go:89] found id: ""
	I0722 00:51:42.352114   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.352121   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:42.352127   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:42.352174   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:42.384984   71766 cri.go:89] found id: ""
	I0722 00:51:42.385012   71766 logs.go:276] 0 containers: []
	W0722 00:51:42.385023   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:42.385032   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:42.385052   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:42.437821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:42.437864   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:42.453172   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:42.453200   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:42.524666   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:42.524690   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:42.524704   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:42.596367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:42.596412   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:38.766280   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:40.767271   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.768887   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:43.380094   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.380125   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:42.969140   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.469669   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:45.135754   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.149463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:45.149520   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:45.186219   71766 cri.go:89] found id: ""
	I0722 00:51:45.186253   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.186262   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:45.186268   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:45.186317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:45.218081   71766 cri.go:89] found id: ""
	I0722 00:51:45.218103   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.218111   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:45.218116   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:45.218181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:45.250347   71766 cri.go:89] found id: ""
	I0722 00:51:45.250381   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.250391   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:45.250397   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:45.250449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:45.283925   71766 cri.go:89] found id: ""
	I0722 00:51:45.283953   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.283963   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:45.283969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:45.284030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:45.315958   71766 cri.go:89] found id: ""
	I0722 00:51:45.315987   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.315998   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:45.316004   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:45.316064   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:45.348880   71766 cri.go:89] found id: ""
	I0722 00:51:45.348930   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.348955   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:45.348969   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:45.349030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:45.385443   71766 cri.go:89] found id: ""
	I0722 00:51:45.385471   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.385479   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:45.385485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:45.385533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:45.426489   71766 cri.go:89] found id: ""
	I0722 00:51:45.426517   71766 logs.go:276] 0 containers: []
	W0722 00:51:45.426528   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:45.426538   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:45.426553   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:45.476896   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:45.476929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:45.490177   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:45.490208   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:45.560925   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:45.560949   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:45.560963   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:45.635924   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:45.635968   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:48.174520   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:45.268969   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.767012   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.380416   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.881006   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:47.967835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:49.968777   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:48.188181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:48.188248   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:48.220697   71766 cri.go:89] found id: ""
	I0722 00:51:48.220720   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.220728   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:48.220733   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:48.220779   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:48.255161   71766 cri.go:89] found id: ""
	I0722 00:51:48.255195   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.255204   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:48.255211   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:48.255267   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:48.290010   71766 cri.go:89] found id: ""
	I0722 00:51:48.290034   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.290041   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:48.290047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:48.290104   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:48.323348   71766 cri.go:89] found id: ""
	I0722 00:51:48.323373   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.323383   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:48.323389   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:48.323449   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:48.355890   71766 cri.go:89] found id: ""
	I0722 00:51:48.355915   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.355925   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:48.355932   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:48.355990   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:48.390126   71766 cri.go:89] found id: ""
	I0722 00:51:48.390153   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.390163   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:48.390169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:48.390228   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:48.423639   71766 cri.go:89] found id: ""
	I0722 00:51:48.423672   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.423681   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:48.423687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:48.423737   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:48.456411   71766 cri.go:89] found id: ""
	I0722 00:51:48.456434   71766 logs.go:276] 0 containers: []
	W0722 00:51:48.456441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:48.456449   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:48.456460   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:48.510928   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:48.510960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:48.524328   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:48.524356   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:48.595665   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:48.595687   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:48.595702   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:48.678579   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:48.678622   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:51.216641   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:51.229921   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:51.229977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:51.263501   71766 cri.go:89] found id: ""
	I0722 00:51:51.263534   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.263543   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:51.263566   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:51.263627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:51.297587   71766 cri.go:89] found id: ""
	I0722 00:51:51.297621   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.297630   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:51.297636   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:51.297693   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:51.333367   71766 cri.go:89] found id: ""
	I0722 00:51:51.333389   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.333397   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:51.333403   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:51.333450   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:51.370404   71766 cri.go:89] found id: ""
	I0722 00:51:51.370432   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.370439   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:51.370445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:51.370496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:51.405224   71766 cri.go:89] found id: ""
	I0722 00:51:51.405254   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.405264   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:51.405272   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:51.405329   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:51.444786   71766 cri.go:89] found id: ""
	I0722 00:51:51.444815   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.444823   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:51.444828   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:51.444882   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:51.488370   71766 cri.go:89] found id: ""
	I0722 00:51:51.488399   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.488410   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:51.488417   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:51.488476   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:51.533358   71766 cri.go:89] found id: ""
	I0722 00:51:51.533388   71766 logs.go:276] 0 containers: []
	W0722 00:51:51.533398   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:51.533408   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:51.533421   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:51.593455   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:51.593485   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:51.607485   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:51.607511   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:51.680006   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:51.680029   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:51.680050   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:51.760863   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:51.760896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:49.767585   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.767748   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:52.380304   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.381124   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:51.968932   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.469798   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:54.298738   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:54.311256   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:54.311317   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:54.346909   71766 cri.go:89] found id: ""
	I0722 00:51:54.346941   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.346953   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:54.346961   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:54.347057   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:54.381744   71766 cri.go:89] found id: ""
	I0722 00:51:54.381769   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.381779   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:54.381784   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:54.381855   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:54.414782   71766 cri.go:89] found id: ""
	I0722 00:51:54.414806   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.414814   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:54.414819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:54.414877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:54.446679   71766 cri.go:89] found id: ""
	I0722 00:51:54.446710   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.446722   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:54.446730   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:54.446798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:54.481334   71766 cri.go:89] found id: ""
	I0722 00:51:54.481361   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.481372   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:54.481380   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:54.481445   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:54.515843   71766 cri.go:89] found id: ""
	I0722 00:51:54.515870   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.515879   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:54.515885   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:54.515936   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:54.551631   71766 cri.go:89] found id: ""
	I0722 00:51:54.551657   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.551667   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:54.551674   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:54.551746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:54.584743   71766 cri.go:89] found id: ""
	I0722 00:51:54.584784   71766 logs.go:276] 0 containers: []
	W0722 00:51:54.584797   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:54.584808   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:54.584821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:54.660162   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:54.660197   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.702746   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:54.702777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:54.758639   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:54.758683   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:54.773203   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:54.773227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:54.842504   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.343055   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:51:57.357285   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:51:57.357367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:51:57.391222   71766 cri.go:89] found id: ""
	I0722 00:51:57.391248   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.391258   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:51:57.391265   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:51:57.391324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:51:57.427831   71766 cri.go:89] found id: ""
	I0722 00:51:57.427864   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.427873   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:51:57.427880   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:51:57.427945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:51:57.463553   71766 cri.go:89] found id: ""
	I0722 00:51:57.463582   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.463593   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:51:57.463599   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:51:57.463667   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:51:57.496603   71766 cri.go:89] found id: ""
	I0722 00:51:57.496630   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.496638   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:51:57.496643   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:51:57.496690   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:51:57.528071   71766 cri.go:89] found id: ""
	I0722 00:51:57.528097   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.528108   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:51:57.528115   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:51:57.528175   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:51:57.560950   71766 cri.go:89] found id: ""
	I0722 00:51:57.560974   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.560982   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:51:57.560987   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:51:57.561030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:51:57.594826   71766 cri.go:89] found id: ""
	I0722 00:51:57.594856   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.594872   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:51:57.594880   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:51:57.594941   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:51:57.626279   71766 cri.go:89] found id: ""
	I0722 00:51:57.626320   71766 logs.go:276] 0 containers: []
	W0722 00:51:57.626331   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:51:57.626340   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:51:57.626354   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:51:57.675395   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:51:57.675428   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:51:57.688703   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:51:57.688740   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:51:57.757062   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:51:57.757082   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:51:57.757095   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:51:57.833964   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:51:57.833995   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:51:54.267185   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.267224   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.880401   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.379846   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.380981   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:56.968753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:51:59.470232   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.371828   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:00.385006   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:00.385073   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:00.419004   71766 cri.go:89] found id: ""
	I0722 00:52:00.419030   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.419038   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:00.419043   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:00.419100   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:00.453855   71766 cri.go:89] found id: ""
	I0722 00:52:00.453882   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.453892   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:00.453900   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:00.453963   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:00.488118   71766 cri.go:89] found id: ""
	I0722 00:52:00.488152   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.488163   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:00.488174   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:00.488236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:00.522251   71766 cri.go:89] found id: ""
	I0722 00:52:00.522277   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.522285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:00.522290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:00.522349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:00.557269   71766 cri.go:89] found id: ""
	I0722 00:52:00.557297   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.557305   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:00.557311   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:00.557367   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:00.592355   71766 cri.go:89] found id: ""
	I0722 00:52:00.592389   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.592401   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:00.592408   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:00.592486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:00.626543   71766 cri.go:89] found id: ""
	I0722 00:52:00.626569   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.626576   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:00.626582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:00.626650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:00.659641   71766 cri.go:89] found id: ""
	I0722 00:52:00.659662   71766 logs.go:276] 0 containers: []
	W0722 00:52:00.659670   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:00.659678   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:00.659688   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:00.736338   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:00.736380   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:00.774823   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:00.774852   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:00.826186   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:00.826222   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:00.840191   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:00.840227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:00.906902   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
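The cycle above repeats while the control plane stays down: minikube probes for each expected control-plane container, finds none, then gathers kubelet, dmesg, CRI-O and container-status logs, and its `kubectl describe nodes` call fails because nothing is serving on localhost:8443. A minimal sketch of the same diagnostics run by hand, assuming shell access to the node (for example via `minikube ssh`); the commands are taken verbatim from the log above:

    # Probe for each expected control-plane container (all return no IDs here).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Gather the same logs minikube collects on each retry.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Fails with "connection ... refused" while the apiserver is down.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig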
	I0722 00:51:58.268641   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:00.766938   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:02.767254   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.880694   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.380080   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:01.967784   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.969465   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:06.468358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:03.407246   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:03.419754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:03.419822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:03.456294   71766 cri.go:89] found id: ""
	I0722 00:52:03.456327   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.456334   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:03.456342   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:03.456391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:03.490314   71766 cri.go:89] found id: ""
	I0722 00:52:03.490337   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.490345   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:03.490350   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:03.490402   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:03.522266   71766 cri.go:89] found id: ""
	I0722 00:52:03.522295   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.522313   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:03.522320   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:03.522385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:03.554323   71766 cri.go:89] found id: ""
	I0722 00:52:03.554358   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.554369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:03.554377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:03.554443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:03.589633   71766 cri.go:89] found id: ""
	I0722 00:52:03.589657   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.589664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:03.589669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:03.589718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:03.626086   71766 cri.go:89] found id: ""
	I0722 00:52:03.626112   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.626120   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:03.626125   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:03.626171   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:03.659628   71766 cri.go:89] found id: ""
	I0722 00:52:03.659655   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.659665   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:03.659671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:03.659729   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:03.694415   71766 cri.go:89] found id: ""
	I0722 00:52:03.694444   71766 logs.go:276] 0 containers: []
	W0722 00:52:03.694460   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:03.694471   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:03.694487   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:03.744456   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:03.744497   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:03.757444   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:03.757470   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:03.822888   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:03.822912   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:03.822923   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:03.898806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:03.898838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:06.445112   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:06.457755   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:06.457836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:06.490886   71766 cri.go:89] found id: ""
	I0722 00:52:06.490907   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.490914   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:06.490920   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:06.490977   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:06.522528   71766 cri.go:89] found id: ""
	I0722 00:52:06.522555   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.522563   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:06.522568   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:06.522648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:06.552993   71766 cri.go:89] found id: ""
	I0722 00:52:06.553023   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.553033   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:06.553041   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:06.553102   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:06.584128   71766 cri.go:89] found id: ""
	I0722 00:52:06.584153   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.584161   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:06.584166   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:06.584230   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:06.615920   71766 cri.go:89] found id: ""
	I0722 00:52:06.615944   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.615952   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:06.615957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:06.616013   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:06.651832   71766 cri.go:89] found id: ""
	I0722 00:52:06.651857   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.651865   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:06.651870   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:06.651916   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:06.683799   71766 cri.go:89] found id: ""
	I0722 00:52:06.683826   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.683836   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:06.683842   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:06.683900   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:06.718586   71766 cri.go:89] found id: ""
	I0722 00:52:06.718630   71766 logs.go:276] 0 containers: []
	W0722 00:52:06.718647   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:06.718657   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:06.718675   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:06.768787   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:06.768818   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:06.782465   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:06.782488   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:06.853738   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:06.853757   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:06.853772   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:06.938782   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:06.938821   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:05.266865   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:07.267037   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.880530   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.382898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:08.969967   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:10.970679   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
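Interleaved with that retry loop, three other test profiles (PIDs 71396, 72069 and 71227) are polling their metrics-server pods, which never report Ready. A minimal sketch of an equivalent manual check; the `<profile>` placeholder and the `-l k8s-app=metrics-server` selector are assumptions for illustration, not taken from the log:

    # Hypothetical one-liner: print the Ready condition of the metrics-server pod.
    kubectl --context <profile> -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'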
	I0722 00:52:09.476016   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:09.489675   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:09.489746   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:09.522128   71766 cri.go:89] found id: ""
	I0722 00:52:09.522160   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.522179   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:09.522188   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:09.522260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:09.556074   71766 cri.go:89] found id: ""
	I0722 00:52:09.556107   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.556118   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:09.556125   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:09.556182   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:09.586592   71766 cri.go:89] found id: ""
	I0722 00:52:09.586650   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.586661   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:09.586669   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:09.586734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:09.618242   71766 cri.go:89] found id: ""
	I0722 00:52:09.618273   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.618285   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:09.618292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:09.618362   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:09.649844   71766 cri.go:89] found id: ""
	I0722 00:52:09.649874   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.649884   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:09.649892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:09.649955   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:09.682863   71766 cri.go:89] found id: ""
	I0722 00:52:09.682890   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.682898   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:09.682905   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:09.682964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:09.714215   71766 cri.go:89] found id: ""
	I0722 00:52:09.714244   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.714254   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:09.714259   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:09.714308   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:09.750916   71766 cri.go:89] found id: ""
	I0722 00:52:09.750944   71766 logs.go:276] 0 containers: []
	W0722 00:52:09.750954   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:09.750964   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:09.750979   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:09.832038   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:09.832081   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.868528   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:09.868560   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:09.928196   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:09.928227   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:09.942388   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:09.942418   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:10.021483   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.521868   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:12.534648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:12.534718   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:12.566448   71766 cri.go:89] found id: ""
	I0722 00:52:12.566479   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.566490   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:12.566497   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:12.566553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:12.598007   71766 cri.go:89] found id: ""
	I0722 00:52:12.598034   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.598042   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:12.598047   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:12.598108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:12.629240   71766 cri.go:89] found id: ""
	I0722 00:52:12.629266   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.629273   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:12.629278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:12.629346   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:12.664580   71766 cri.go:89] found id: ""
	I0722 00:52:12.664605   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.664620   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:12.664627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:12.664701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:12.701789   71766 cri.go:89] found id: ""
	I0722 00:52:12.701830   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.701838   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:12.701844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:12.701911   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:12.739553   71766 cri.go:89] found id: ""
	I0722 00:52:12.739581   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.739589   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:12.739595   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:12.739643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:12.774254   71766 cri.go:89] found id: ""
	I0722 00:52:12.774281   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.774290   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:12.774296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:12.774368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:12.809794   71766 cri.go:89] found id: ""
	I0722 00:52:12.809833   71766 logs.go:276] 0 containers: []
	W0722 00:52:12.809844   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:12.809853   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:12.809866   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:12.862302   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:12.862344   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:12.875459   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:12.875495   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:12.952319   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:12.952340   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:12.952360   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:13.033287   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:13.033322   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:09.267496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:11.268205   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.879513   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.880586   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:13.469483   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.970493   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.578384   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:15.591158   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:15.591236   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:15.623545   71766 cri.go:89] found id: ""
	I0722 00:52:15.623568   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.623577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:15.623583   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:15.623650   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:15.656309   71766 cri.go:89] found id: ""
	I0722 00:52:15.656337   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.656347   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:15.656354   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:15.656415   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:15.691305   71766 cri.go:89] found id: ""
	I0722 00:52:15.691333   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.691341   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:15.691346   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:15.691399   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:15.723356   71766 cri.go:89] found id: ""
	I0722 00:52:15.723382   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.723389   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:15.723395   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:15.723452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:15.758917   71766 cri.go:89] found id: ""
	I0722 00:52:15.758939   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.758949   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:15.758956   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:15.759022   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:15.792619   71766 cri.go:89] found id: ""
	I0722 00:52:15.792641   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.792649   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:15.792654   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:15.792713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:15.828078   71766 cri.go:89] found id: ""
	I0722 00:52:15.828101   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.828115   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:15.828131   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:15.828198   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:15.864210   71766 cri.go:89] found id: ""
	I0722 00:52:15.864239   71766 logs.go:276] 0 containers: []
	W0722 00:52:15.864250   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:15.864259   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:15.864271   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:15.918696   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:15.918742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:15.933790   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:15.933817   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:16.010940   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:16.010958   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:16.010972   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:16.092542   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:16.092582   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:13.766713   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:15.768232   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.379974   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.880215   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.468830   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.968643   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:18.630499   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:18.643726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:18.643791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:18.680192   71766 cri.go:89] found id: ""
	I0722 00:52:18.680220   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.680230   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:18.680237   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:18.680297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:18.719370   71766 cri.go:89] found id: ""
	I0722 00:52:18.719397   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.719406   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:18.719411   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:18.719461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:18.760106   71766 cri.go:89] found id: ""
	I0722 00:52:18.760132   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.760143   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:18.760149   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:18.760211   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:18.792661   71766 cri.go:89] found id: ""
	I0722 00:52:18.792686   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.792694   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:18.792700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:18.792760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:18.828419   71766 cri.go:89] found id: ""
	I0722 00:52:18.828445   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.828455   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:18.828463   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:18.828522   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:18.864434   71766 cri.go:89] found id: ""
	I0722 00:52:18.864462   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.864471   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:18.864479   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:18.864536   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:18.898512   71766 cri.go:89] found id: ""
	I0722 00:52:18.898537   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.898548   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:18.898555   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:18.898638   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:18.931399   71766 cri.go:89] found id: ""
	I0722 00:52:18.931434   71766 logs.go:276] 0 containers: []
	W0722 00:52:18.931445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:18.931456   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:18.931469   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:18.985778   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:18.985812   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:18.999621   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:18.999649   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:19.079310   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:19.079333   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:19.079349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:19.159336   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:19.159373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:21.705449   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:21.718079   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:21.718136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:21.751749   71766 cri.go:89] found id: ""
	I0722 00:52:21.751778   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.751790   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:21.751799   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:21.751864   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:21.785265   71766 cri.go:89] found id: ""
	I0722 00:52:21.785287   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.785295   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:21.785301   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:21.785349   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:21.818726   71766 cri.go:89] found id: ""
	I0722 00:52:21.818760   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.818770   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:21.818779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:21.818845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:21.852033   71766 cri.go:89] found id: ""
	I0722 00:52:21.852065   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.852075   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:21.852084   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:21.852136   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:21.886285   71766 cri.go:89] found id: ""
	I0722 00:52:21.886315   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.886324   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:21.886330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:21.886388   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:21.918083   71766 cri.go:89] found id: ""
	I0722 00:52:21.918111   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.918121   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:21.918128   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:21.918196   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:21.953682   71766 cri.go:89] found id: ""
	I0722 00:52:21.953705   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.953712   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:21.953717   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:21.953765   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:21.987763   71766 cri.go:89] found id: ""
	I0722 00:52:21.987787   71766 logs.go:276] 0 containers: []
	W0722 00:52:21.987796   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:21.987804   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:21.987815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:22.028236   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:22.028265   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:22.078821   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:22.078858   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:22.092023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:22.092048   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:22.164255   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:22.164281   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:22.164295   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:18.267051   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:20.268460   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.765953   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:23.379851   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:25.380352   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:22.968779   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.969210   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:24.741954   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:24.754664   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:24.754734   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:24.787652   71766 cri.go:89] found id: ""
	I0722 00:52:24.787680   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.787691   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:24.787698   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:24.787760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:24.821756   71766 cri.go:89] found id: ""
	I0722 00:52:24.821778   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.821786   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:24.821792   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:24.821836   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:24.855624   71766 cri.go:89] found id: ""
	I0722 00:52:24.855656   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.855668   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:24.855677   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:24.855749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:24.892205   71766 cri.go:89] found id: ""
	I0722 00:52:24.892226   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.892233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:24.892239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:24.892294   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:24.929367   71766 cri.go:89] found id: ""
	I0722 00:52:24.929388   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.929395   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:24.929401   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:24.929447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:24.968712   71766 cri.go:89] found id: ""
	I0722 00:52:24.968737   71766 logs.go:276] 0 containers: []
	W0722 00:52:24.968747   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:24.968754   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:24.968816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:25.001350   71766 cri.go:89] found id: ""
	I0722 00:52:25.001379   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.001389   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:25.001396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:25.001463   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:25.038489   71766 cri.go:89] found id: ""
	I0722 00:52:25.038513   71766 logs.go:276] 0 containers: []
	W0722 00:52:25.038520   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:25.038527   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:25.038538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:25.108598   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:25.108627   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:25.108642   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:25.192813   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:25.192848   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:25.230825   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:25.230849   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:25.284873   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:25.284902   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:27.814540   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:27.827199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:27.827280   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:27.860243   71766 cri.go:89] found id: ""
	I0722 00:52:27.860272   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.860283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:27.860289   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:27.860357   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:27.895748   71766 cri.go:89] found id: ""
	I0722 00:52:27.895776   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.895785   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:27.895791   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:27.895854   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:27.929631   71766 cri.go:89] found id: ""
	I0722 00:52:27.929663   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.929675   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:27.929681   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:27.929749   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:27.963729   71766 cri.go:89] found id: ""
	I0722 00:52:27.963768   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.963779   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:27.963786   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:27.963845   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:27.997597   71766 cri.go:89] found id: ""
	I0722 00:52:27.997627   71766 logs.go:276] 0 containers: []
	W0722 00:52:27.997638   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:27.997645   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:27.997704   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:28.029689   71766 cri.go:89] found id: ""
	I0722 00:52:28.029712   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.029722   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:28.029729   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:28.029790   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:28.066005   71766 cri.go:89] found id: ""
	I0722 00:52:28.066086   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.066113   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:28.066122   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:28.066181   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:28.100274   71766 cri.go:89] found id: ""
	I0722 00:52:28.100300   71766 logs.go:276] 0 containers: []
	W0722 00:52:28.100308   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:28.100316   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:28.100342   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:24.767122   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:26.768557   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.381658   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.880191   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:27.469220   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:29.968001   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:28.183367   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:28.183401   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:28.218954   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:28.218989   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:28.266468   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:28.266498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:28.280954   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:28.280983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:28.344427   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
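Every `describe nodes` attempt fails the same way: 8443 is minikube's default apiserver port, and the refused connection is consistent with the empty `crictl` listings above (no kube-apiserver container ever comes up). A quick way to confirm nothing is listening, as a sketch assuming shell access to the node:

    # Hypothetical check: list TCP listeners and look for the apiserver port.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"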
	I0722 00:52:30.845577   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:30.858825   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:30.858884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:30.896926   71766 cri.go:89] found id: ""
	I0722 00:52:30.896955   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.896965   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:30.896973   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:30.897032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:30.933027   71766 cri.go:89] found id: ""
	I0722 00:52:30.933059   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.933070   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:30.933077   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:30.933129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:30.970925   71766 cri.go:89] found id: ""
	I0722 00:52:30.970951   71766 logs.go:276] 0 containers: []
	W0722 00:52:30.970961   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:30.970968   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:30.971036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:31.001860   71766 cri.go:89] found id: ""
	I0722 00:52:31.001889   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.001900   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:31.001908   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:31.001961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:31.039895   71766 cri.go:89] found id: ""
	I0722 00:52:31.039927   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.039938   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:31.039946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:31.040012   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:31.080112   71766 cri.go:89] found id: ""
	I0722 00:52:31.080139   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.080147   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:31.080153   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:31.080203   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:31.114966   71766 cri.go:89] found id: ""
	I0722 00:52:31.114989   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.114996   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:31.115002   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:31.115063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:31.147955   71766 cri.go:89] found id: ""
	I0722 00:52:31.147985   71766 logs.go:276] 0 containers: []
	W0722 00:52:31.147994   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
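
Each cri.go / "found id" pair above is one probe in a fixed sweep over the expected control-plane containers; empty output from crictl is what gets reported as "0 containers". Roughly what each probe does (the crictl flags are copied verbatim from the log; the helper name and loop are illustrative):

    // crisweep_sketch.go: query crictl once per expected container name.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // One container ID per line; no output means no matching container.
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        } {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers\n", name, len(ids)) // 0 for every name above
        }
    }
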
	I0722 00:52:31.148008   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:31.148020   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:31.183969   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:31.184004   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:31.237561   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:31.237598   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:31.250850   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:31.250880   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:31.318996   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:31.319017   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:31.319031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:29.267019   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.267642   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.880620   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.381010   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.382154   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:31.969043   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:34.469119   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:33.903019   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:33.916373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:33.916452   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:33.952021   71766 cri.go:89] found id: ""
	I0722 00:52:33.952050   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.952060   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:33.952068   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:33.952130   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:33.988479   71766 cri.go:89] found id: ""
	I0722 00:52:33.988502   71766 logs.go:276] 0 containers: []
	W0722 00:52:33.988513   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:33.988520   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:33.988575   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:34.024941   71766 cri.go:89] found id: ""
	I0722 00:52:34.024966   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.024976   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:34.024983   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:34.025054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:34.061899   71766 cri.go:89] found id: ""
	I0722 00:52:34.061922   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.061929   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:34.061934   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:34.061978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:34.097241   71766 cri.go:89] found id: ""
	I0722 00:52:34.097266   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.097272   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:34.097278   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:34.097324   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:34.133447   71766 cri.go:89] found id: ""
	I0722 00:52:34.133472   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.133486   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:34.133495   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:34.133569   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:34.168985   71766 cri.go:89] found id: ""
	I0722 00:52:34.169013   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.169024   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:34.169033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:34.169093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:34.204926   71766 cri.go:89] found id: ""
	I0722 00:52:34.204961   71766 logs.go:276] 0 containers: []
	W0722 00:52:34.204973   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:34.204984   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:34.205001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:34.287024   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:34.287064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:34.326740   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:34.326766   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:34.379610   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:34.379648   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:34.395812   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:34.395833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:34.462638   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
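
The recurring "describe nodes" failure is consistent with the empty crictl sweeps: the pinned v1.20.0 kubectl is run against the node-local kubeconfig, whose server is localhost:8443, and with no kube-apiserver container running the TCP connect is refused, so the command exits 1 and the gatherer records the stderr block verbatim. A sketch of that probe (the command and paths are copied from the log; the refusal check is our own):

    // describeprobe_sketch.go: run the same command the gatherer runs and
    // classify the "connection refused" case as "apiserver not up yet".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil && strings.Contains(string(out), "connection to the server localhost:8443 was refused") {
            fmt.Println("apiserver not reachable yet; log the stderr and retry next cycle")
            return
        }
        fmt.Print(string(out))
    }
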
	I0722 00:52:36.963421   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:36.976297   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:36.976375   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:37.009022   71766 cri.go:89] found id: ""
	I0722 00:52:37.009048   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.009056   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:37.009062   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:37.009125   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:37.042741   71766 cri.go:89] found id: ""
	I0722 00:52:37.042769   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.042780   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:37.042786   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:37.042833   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:37.076534   71766 cri.go:89] found id: ""
	I0722 00:52:37.076563   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.076574   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:37.076582   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:37.076642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:37.109077   71766 cri.go:89] found id: ""
	I0722 00:52:37.109107   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.109118   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:37.109124   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:37.109179   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:37.142946   71766 cri.go:89] found id: ""
	I0722 00:52:37.142978   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.142988   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:37.142995   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:37.143055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:37.177145   71766 cri.go:89] found id: ""
	I0722 00:52:37.177174   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.177183   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:37.177189   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:37.177242   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:37.210379   71766 cri.go:89] found id: ""
	I0722 00:52:37.210408   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.210416   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:37.210422   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:37.210470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:37.243301   71766 cri.go:89] found id: ""
	I0722 00:52:37.243331   71766 logs.go:276] 0 containers: []
	W0722 00:52:37.243341   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:37.243353   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:37.243366   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:37.285705   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:37.285733   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:37.333569   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:37.333600   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:37.348189   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:37.348213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:37.417740   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:37.417763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:37.417778   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:33.767300   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:35.767587   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.880458   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.379709   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:36.968614   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:38.969746   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:41.468531   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:39.999065   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:40.011700   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:40.011768   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:40.044984   71766 cri.go:89] found id: ""
	I0722 00:52:40.045013   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.045022   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:40.045028   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:40.045074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:40.079176   71766 cri.go:89] found id: ""
	I0722 00:52:40.079202   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.079212   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:40.079219   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:40.079290   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:40.110972   71766 cri.go:89] found id: ""
	I0722 00:52:40.110998   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.111011   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:40.111017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:40.111075   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:40.144286   71766 cri.go:89] found id: ""
	I0722 00:52:40.144312   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.144320   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:40.144325   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:40.144383   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:40.179931   71766 cri.go:89] found id: ""
	I0722 00:52:40.179959   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.179969   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:40.179976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:40.180036   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:40.217209   71766 cri.go:89] found id: ""
	I0722 00:52:40.217237   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.217244   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:40.217249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:40.217296   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:40.250144   71766 cri.go:89] found id: ""
	I0722 00:52:40.250174   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.250183   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:40.250199   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:40.250266   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:40.284480   71766 cri.go:89] found id: ""
	I0722 00:52:40.284511   71766 logs.go:276] 0 containers: []
	W0722 00:52:40.284522   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:40.284536   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:40.284563   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:40.338271   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:40.338306   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:40.352450   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:40.352480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:40.418038   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:40.418059   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:40.418072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:40.495011   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:40.495043   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
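
One detail worth noting: the five gather steps (kubelet, dmesg, describe nodes, CRI-O, container status) come out in a different order each cycle, which is what iterating a Go map looks like; the set is fixed but the sequence is randomized. A tiny illustration (the map contents here are ours, not necessarily how logs.go stores its gatherers):

    // gatherorder_sketch.go: map iteration order in Go is deliberately
    // randomized, so repeated runs print the steps in varying sequence.
    package main

    import "fmt"

    func main() {
        gatherers := map[string]func(){
            "kubelet":          func() {},
            "dmesg":            func() {},
            "describe nodes":   func() {},
            "CRI-O":            func() {},
            "container status": func() {},
        }
        for name := range gatherers {
            fmt.Println("Gathering logs for", name, "...")
        }
    }
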
	I0722 00:52:43.035705   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:43.048744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:43.048803   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:43.080512   71766 cri.go:89] found id: ""
	I0722 00:52:43.080540   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.080550   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:43.080561   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:43.080614   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:43.114717   71766 cri.go:89] found id: ""
	I0722 00:52:43.114746   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.114757   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:43.114764   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:43.114824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:43.147117   71766 cri.go:89] found id: ""
	I0722 00:52:43.147143   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.147151   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:43.147156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:43.147207   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:38.266674   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:40.268425   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:42.767124   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.380636   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.380873   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.469751   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:45.967500   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:43.187468   71766 cri.go:89] found id: ""
	I0722 00:52:43.187500   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.187511   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:43.187517   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:43.187583   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:43.236569   71766 cri.go:89] found id: ""
	I0722 00:52:43.236592   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.236599   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:43.236604   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:43.236656   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:43.283383   71766 cri.go:89] found id: ""
	I0722 00:52:43.283410   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.283420   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:43.283426   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:43.283480   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:43.321118   71766 cri.go:89] found id: ""
	I0722 00:52:43.321151   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.321161   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:43.321169   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:43.321227   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:43.354982   71766 cri.go:89] found id: ""
	I0722 00:52:43.355014   71766 logs.go:276] 0 containers: []
	W0722 00:52:43.355026   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:43.355037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:43.355051   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:43.436402   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:43.436439   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:43.476061   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:43.476088   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:43.526963   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:43.527001   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:43.541987   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:43.542016   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:43.611431   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.112321   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:46.126102   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:46.126178   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:46.158497   71766 cri.go:89] found id: ""
	I0722 00:52:46.158519   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.158526   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:46.158531   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:46.158578   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:46.194017   71766 cri.go:89] found id: ""
	I0722 00:52:46.194040   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.194048   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:46.194057   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:46.194117   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:46.227514   71766 cri.go:89] found id: ""
	I0722 00:52:46.227541   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.227549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:46.227554   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:46.227610   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:46.261493   71766 cri.go:89] found id: ""
	I0722 00:52:46.261523   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.261532   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:46.261541   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:46.261600   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:46.295771   71766 cri.go:89] found id: ""
	I0722 00:52:46.295798   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.295808   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:46.295816   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:46.295880   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:46.327933   71766 cri.go:89] found id: ""
	I0722 00:52:46.327963   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.327974   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:46.327981   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:46.328050   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:46.365667   71766 cri.go:89] found id: ""
	I0722 00:52:46.365694   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.365705   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:46.365718   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:46.365783   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:46.402543   71766 cri.go:89] found id: ""
	I0722 00:52:46.402569   71766 logs.go:276] 0 containers: []
	W0722 00:52:46.402576   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:46.402585   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:46.402596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:46.456233   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:46.456270   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:46.469775   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:46.469802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:46.536502   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:46.536523   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:46.536534   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:46.612576   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:46.612616   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:44.768316   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.267720   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.381216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.383578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:47.968590   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.970425   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:49.152649   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:49.165328   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:49.165385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:49.200745   71766 cri.go:89] found id: ""
	I0722 00:52:49.200766   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.200773   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:49.200778   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:49.200835   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:49.233421   71766 cri.go:89] found id: ""
	I0722 00:52:49.233446   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.233456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:49.233463   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:49.233523   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:49.265803   71766 cri.go:89] found id: ""
	I0722 00:52:49.265834   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.265843   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:49.265850   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:49.265906   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:49.302910   71766 cri.go:89] found id: ""
	I0722 00:52:49.302936   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.302944   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:49.302949   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:49.303003   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:49.336666   71766 cri.go:89] found id: ""
	I0722 00:52:49.336709   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.336719   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:49.336726   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:49.336791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:49.369104   71766 cri.go:89] found id: ""
	I0722 00:52:49.369130   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.369140   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:49.369148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:49.369210   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:49.404102   71766 cri.go:89] found id: ""
	I0722 00:52:49.404126   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.404134   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:49.404139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:49.404190   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:49.436406   71766 cri.go:89] found id: ""
	I0722 00:52:49.436435   71766 logs.go:276] 0 containers: []
	W0722 00:52:49.436445   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:49.436455   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:49.436471   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:49.492183   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:49.492213   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:49.505476   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:49.505498   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:49.570495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
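
Stepping back: the cycles above repeat on a roughly three-second cadence, the signature of a poll-until-timeout loop, probing for an apiserver, gathering what diagnostics exist, and going around again until the test's deadline expires (this cluster is pinned to Kubernetes v1.20.0, per the binary paths above). A generic sketch with apimachinery's wait helper; the interval, timeout, and health check are illustrative, not minikube's actual settings:

    // apiserverwait_sketch.go: poll until localhost:8443 accepts connections.
    package main

    import (
        "fmt"
        "net"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        err := wait.PollImmediate(3*time.Second, 5*time.Minute, func() (bool, error) {
            // Same signal the log keys on: is anything listening on 8443?
            conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
            if err != nil {
                return false, nil // not up yet; poll again
            }
            conn.Close()
            return true, nil
        })
        fmt.Println("apiserver wait:", err) // nil on success, timeout error otherwise
    }
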
	I0722 00:52:49.570522   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:49.570538   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:49.653195   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:49.653244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:52.189036   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:52.205048   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:52.205112   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:52.241144   71766 cri.go:89] found id: ""
	I0722 00:52:52.241173   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.241181   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:52.241186   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:52.241249   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:52.275124   71766 cri.go:89] found id: ""
	I0722 00:52:52.275148   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.275157   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:52.275164   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:52.275232   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:52.306816   71766 cri.go:89] found id: ""
	I0722 00:52:52.306842   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.306850   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:52.306855   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:52.306907   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:52.340579   71766 cri.go:89] found id: ""
	I0722 00:52:52.340602   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.340610   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:52.340615   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:52.340671   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:52.374786   71766 cri.go:89] found id: ""
	I0722 00:52:52.374808   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.374818   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:52.374824   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:52.374884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:52.409149   71766 cri.go:89] found id: ""
	I0722 00:52:52.409172   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.409180   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:52.409185   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:52.409243   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:52.441593   71766 cri.go:89] found id: ""
	I0722 00:52:52.441619   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.441627   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:52.441633   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:52.441689   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:52.474901   71766 cri.go:89] found id: ""
	I0722 00:52:52.474929   71766 logs.go:276] 0 containers: []
	W0722 00:52:52.474941   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:52.474952   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:52.475071   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:52.528173   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:52.528204   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:52.541353   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:52.541383   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:52.613194   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:52.613227   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:52.613244   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:52.692490   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:52.692522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:49.268032   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.768264   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:51.879436   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.380653   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:52.468894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:54.968161   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:55.228860   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:55.241365   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:55.241440   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:55.276098   71766 cri.go:89] found id: ""
	I0722 00:52:55.276122   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.276132   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:55.276139   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:55.276201   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:55.308959   71766 cri.go:89] found id: ""
	I0722 00:52:55.308988   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.308998   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:55.309006   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:55.309069   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:55.342417   71766 cri.go:89] found id: ""
	I0722 00:52:55.342441   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.342453   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:55.342459   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:55.342519   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:55.375020   71766 cri.go:89] found id: ""
	I0722 00:52:55.375046   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.375055   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:55.375061   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:55.375108   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:55.414659   71766 cri.go:89] found id: ""
	I0722 00:52:55.414683   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.414691   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:55.414697   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:55.414757   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:55.447651   71766 cri.go:89] found id: ""
	I0722 00:52:55.447688   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.447700   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:55.447707   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:55.447776   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:55.484598   71766 cri.go:89] found id: ""
	I0722 00:52:55.484645   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.484653   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:55.484658   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:55.484713   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:55.517053   71766 cri.go:89] found id: ""
	I0722 00:52:55.517078   71766 logs.go:276] 0 containers: []
	W0722 00:52:55.517086   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:55.517095   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:55.517106   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:55.572171   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:55.572205   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:55.585108   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:55.585136   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:55.653089   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:55.653112   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:55.653129   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:55.727661   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:55.727695   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:52:54.266242   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.267891   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.879845   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.880367   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.380235   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:56.968658   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:59.468263   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:01.471461   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:52:58.265891   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:52:58.279889   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:52:58.279949   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:52:58.315880   71766 cri.go:89] found id: ""
	I0722 00:52:58.315910   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.315919   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:52:58.315924   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:52:58.315981   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:52:58.351267   71766 cri.go:89] found id: ""
	I0722 00:52:58.351298   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.351311   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:52:58.351319   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:52:58.351391   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:52:58.386413   71766 cri.go:89] found id: ""
	I0722 00:52:58.386437   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.386446   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:52:58.386453   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:52:58.386507   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:52:58.424243   71766 cri.go:89] found id: ""
	I0722 00:52:58.424272   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.424283   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:52:58.424289   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:52:58.424350   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:52:58.458199   71766 cri.go:89] found id: ""
	I0722 00:52:58.458231   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.458244   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:52:58.458249   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:52:58.458297   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:52:58.492561   71766 cri.go:89] found id: ""
	I0722 00:52:58.492587   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.492596   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:52:58.492601   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:52:58.492665   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:52:58.524047   71766 cri.go:89] found id: ""
	I0722 00:52:58.524073   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.524081   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:52:58.524086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:52:58.524143   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:52:58.560282   71766 cri.go:89] found id: ""
	I0722 00:52:58.560311   71766 logs.go:276] 0 containers: []
	W0722 00:52:58.560322   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:52:58.560332   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:52:58.560343   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:52:58.610691   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:52:58.610732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:52:58.625098   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:52:58.625131   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:52:58.700876   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:52:58.700895   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:52:58.700948   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:52:58.775444   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:52:58.775480   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.313668   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:01.326288   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:01.326379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:01.360707   71766 cri.go:89] found id: ""
	I0722 00:53:01.360742   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.360753   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:01.360760   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:01.360822   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:01.393394   71766 cri.go:89] found id: ""
	I0722 00:53:01.393418   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.393426   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:01.393431   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:01.393494   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:01.436115   71766 cri.go:89] found id: ""
	I0722 00:53:01.436139   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.436146   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:01.436156   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:01.436205   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:01.471322   71766 cri.go:89] found id: ""
	I0722 00:53:01.471347   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.471364   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:01.471371   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:01.471431   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:01.504889   71766 cri.go:89] found id: ""
	I0722 00:53:01.504920   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.504933   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:01.504941   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:01.505009   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:01.537997   71766 cri.go:89] found id: ""
	I0722 00:53:01.538028   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.538039   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:01.538047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:01.538106   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:01.571151   71766 cri.go:89] found id: ""
	I0722 00:53:01.571176   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.571186   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:01.571192   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:01.571255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:01.603524   71766 cri.go:89] found id: ""
	I0722 00:53:01.603555   71766 logs.go:276] 0 containers: []
	W0722 00:53:01.603566   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:01.603577   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:01.603591   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:01.616646   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:01.616677   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:01.691623   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:01.691644   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:01.691663   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:01.772350   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:01.772381   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:01.811348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:01.811375   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
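
Each "describe nodes" attempt in these cycles fails identically: with no kube-apiserver container running, nothing is listening on localhost:8443, so the bundled kubectl is refused before it can reach the cluster. The failing command can be re-run verbatim on the node (binary and kubeconfig paths exactly as logged):

	# Uses the node-local kubeconfig; keeps failing with "connection refused"
	# until an apiserver container actually comes up on port 8443.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig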
	I0722 00:52:58.767563   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:00.767909   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:02.768338   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.380375   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.381808   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:03.968623   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:05.969573   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
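
The interleaved pod_ready lines come from three other test processes (PIDs 71396, 72069, 71227) polling their clusters' metrics-server pods, all of which keep reporting Ready=False. An equivalent one-off check, assuming kubectl access to the cluster under test (pod name copied from the log; the jsonpath filter selects the Ready condition):

	kubectl --namespace kube-system get pod metrics-server-569cc877fc-k68zp \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'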
	I0722 00:53:04.362258   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:04.375428   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:04.375502   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:04.408573   71766 cri.go:89] found id: ""
	I0722 00:53:04.408608   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.408618   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:04.408626   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:04.408687   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:04.440685   71766 cri.go:89] found id: ""
	I0722 00:53:04.440711   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.440722   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:04.440729   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:04.440798   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:04.473842   71766 cri.go:89] found id: ""
	I0722 00:53:04.473871   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.473881   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:04.473892   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:04.473954   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:04.517943   71766 cri.go:89] found id: ""
	I0722 00:53:04.517980   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.517992   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:04.517998   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:04.518063   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:04.555896   71766 cri.go:89] found id: ""
	I0722 00:53:04.555924   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.555932   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:04.555938   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:04.555991   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:04.593086   71766 cri.go:89] found id: ""
	I0722 00:53:04.593121   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.593131   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:04.593139   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:04.593200   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:04.628182   71766 cri.go:89] found id: ""
	I0722 00:53:04.628207   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.628217   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:04.628224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:04.628288   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:04.659142   71766 cri.go:89] found id: ""
	I0722 00:53:04.659172   71766 logs.go:276] 0 containers: []
	W0722 00:53:04.659183   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:04.659194   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:04.659209   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:04.714648   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:04.714681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:04.728232   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:04.728261   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:04.798771   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:04.798798   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:04.798814   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:04.879698   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:04.879728   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:07.421303   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:07.434650   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:07.434731   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:07.470489   71766 cri.go:89] found id: ""
	I0722 00:53:07.470522   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.470531   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:07.470536   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:07.470595   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:07.503213   71766 cri.go:89] found id: ""
	I0722 00:53:07.503244   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.503255   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:07.503261   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:07.503326   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:07.539209   71766 cri.go:89] found id: ""
	I0722 00:53:07.539233   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.539242   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:07.539247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:07.539312   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:07.572940   71766 cri.go:89] found id: ""
	I0722 00:53:07.572963   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.572971   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:07.572976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:07.573032   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:07.607535   71766 cri.go:89] found id: ""
	I0722 00:53:07.607580   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.607591   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:07.607598   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:07.607659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:07.639035   71766 cri.go:89] found id: ""
	I0722 00:53:07.639063   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.639074   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:07.639082   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:07.639149   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:07.672721   71766 cri.go:89] found id: ""
	I0722 00:53:07.672749   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.672757   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:07.672762   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:07.672816   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:07.706536   71766 cri.go:89] found id: ""
	I0722 00:53:07.706560   71766 logs.go:276] 0 containers: []
	W0722 00:53:07.706568   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:07.706575   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:07.706587   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:07.762203   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:07.762240   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:07.776441   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:07.776468   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:07.843031   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:07.843051   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:07.843064   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:07.922322   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:07.922357   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:05.267484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.767192   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:07.880064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:09.881771   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:08.467736   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.468628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:10.462186   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:10.475400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:10.475478   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:10.508243   71766 cri.go:89] found id: ""
	I0722 00:53:10.508273   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.508285   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:10.508292   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:10.508382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:10.543620   71766 cri.go:89] found id: ""
	I0722 00:53:10.543647   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.543655   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:10.543661   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:10.543708   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:10.578730   71766 cri.go:89] found id: ""
	I0722 00:53:10.578760   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.578771   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:10.578778   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:10.578837   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:10.611531   71766 cri.go:89] found id: ""
	I0722 00:53:10.611560   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.611571   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:10.611578   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:10.611642   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:10.643294   71766 cri.go:89] found id: ""
	I0722 00:53:10.643326   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.643339   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:10.643347   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:10.643408   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:10.675476   71766 cri.go:89] found id: ""
	I0722 00:53:10.675500   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.675508   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:10.675514   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:10.675576   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:10.706847   71766 cri.go:89] found id: ""
	I0722 00:53:10.706875   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.706884   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:10.706891   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:10.706974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:10.739688   71766 cri.go:89] found id: ""
	I0722 00:53:10.739716   71766 logs.go:276] 0 containers: []
	W0722 00:53:10.739727   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:10.739737   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:10.739751   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:10.790747   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:10.790779   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:10.803845   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:10.803876   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:10.873807   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:10.873829   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:10.873851   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:10.962339   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:10.962376   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:10.266351   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.267385   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.380192   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.879663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:12.469268   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:14.967713   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:13.504523   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:13.518171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:13.518235   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:13.552429   71766 cri.go:89] found id: ""
	I0722 00:53:13.552453   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.552463   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:13.552470   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:13.552534   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:13.586452   71766 cri.go:89] found id: ""
	I0722 00:53:13.586496   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.586509   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:13.586519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:13.586593   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:13.619253   71766 cri.go:89] found id: ""
	I0722 00:53:13.619282   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.619290   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:13.619296   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:13.619347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:13.651110   71766 cri.go:89] found id: ""
	I0722 00:53:13.651133   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.651140   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:13.651145   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:13.651192   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:13.682986   71766 cri.go:89] found id: ""
	I0722 00:53:13.683016   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.683027   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:13.683033   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:13.683096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:13.716648   71766 cri.go:89] found id: ""
	I0722 00:53:13.716675   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.716684   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:13.716692   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:13.716753   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:13.748848   71766 cri.go:89] found id: ""
	I0722 00:53:13.748876   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.748888   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:13.748895   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:13.748956   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:13.784825   71766 cri.go:89] found id: ""
	I0722 00:53:13.784858   71766 logs.go:276] 0 containers: []
	W0722 00:53:13.784868   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:13.784879   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:13.784899   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:13.838744   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:13.838789   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:13.851868   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:13.851896   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:13.923467   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:13.923501   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:13.923517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:14.001685   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:14.001738   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:16.540709   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:16.553307   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:16.553382   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:16.589768   71766 cri.go:89] found id: ""
	I0722 00:53:16.589798   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.589809   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:16.589816   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:16.589883   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:16.621862   71766 cri.go:89] found id: ""
	I0722 00:53:16.621885   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.621894   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:16.621901   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:16.621970   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:16.652400   71766 cri.go:89] found id: ""
	I0722 00:53:16.652428   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.652439   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:16.652456   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:16.652529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:16.684295   71766 cri.go:89] found id: ""
	I0722 00:53:16.684327   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.684338   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:16.684345   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:16.684404   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:16.716809   71766 cri.go:89] found id: ""
	I0722 00:53:16.716838   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.716847   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:16.716852   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:16.716899   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:16.750432   71766 cri.go:89] found id: ""
	I0722 00:53:16.750468   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.750478   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:16.750485   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:16.750549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:16.783635   71766 cri.go:89] found id: ""
	I0722 00:53:16.783667   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.783679   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:16.783686   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:16.783760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:16.815792   71766 cri.go:89] found id: ""
	I0722 00:53:16.815822   71766 logs.go:276] 0 containers: []
	W0722 00:53:16.815832   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:16.815842   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:16.815860   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:16.828259   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:16.828294   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:16.902741   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:16.902774   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:16.902802   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:16.987806   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:16.987844   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:17.025177   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:17.025211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:14.267885   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.768206   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.881046   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.380211   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.381067   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:16.969448   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.468471   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:19.585513   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:19.597758   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:19.597832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:19.630982   71766 cri.go:89] found id: ""
	I0722 00:53:19.631021   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.631032   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:19.631038   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:19.631094   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:19.662962   71766 cri.go:89] found id: ""
	I0722 00:53:19.662987   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.662996   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:19.663001   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:19.663058   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:19.695580   71766 cri.go:89] found id: ""
	I0722 00:53:19.695613   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.695622   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:19.695627   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:19.695678   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:19.728134   71766 cri.go:89] found id: ""
	I0722 00:53:19.728162   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.728173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:19.728181   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:19.728234   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:19.759536   71766 cri.go:89] found id: ""
	I0722 00:53:19.759572   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.759584   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:19.759602   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:19.759691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:19.791286   71766 cri.go:89] found id: ""
	I0722 00:53:19.791319   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.791329   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:19.791335   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:19.791385   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:19.822924   71766 cri.go:89] found id: ""
	I0722 00:53:19.822950   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.822960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:19.822967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:19.823027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:19.860097   71766 cri.go:89] found id: ""
	I0722 00:53:19.860125   71766 logs.go:276] 0 containers: []
	W0722 00:53:19.860134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:19.860144   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:19.860159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:19.929148   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:19.929167   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:19.929179   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:20.009151   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:20.009183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:20.048092   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:20.048118   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:20.106309   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:20.106347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.620769   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:22.633544   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:22.633621   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:22.667517   71766 cri.go:89] found id: ""
	I0722 00:53:22.667564   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.667577   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:22.667585   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:22.667645   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:22.702036   71766 cri.go:89] found id: ""
	I0722 00:53:22.702060   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.702068   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:22.702073   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:22.702137   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:22.735505   71766 cri.go:89] found id: ""
	I0722 00:53:22.735538   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.735549   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:22.735556   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:22.735627   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:22.770433   71766 cri.go:89] found id: ""
	I0722 00:53:22.770459   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.770468   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:22.770475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:22.770533   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:22.825657   71766 cri.go:89] found id: ""
	I0722 00:53:22.825687   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.825698   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:22.825705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:22.825760   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:22.860883   71766 cri.go:89] found id: ""
	I0722 00:53:22.860916   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.860929   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:22.860937   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:22.861002   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:22.895645   71766 cri.go:89] found id: ""
	I0722 00:53:22.895668   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.895676   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:22.895680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:22.895759   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:22.937062   71766 cri.go:89] found id: ""
	I0722 00:53:22.937087   71766 logs.go:276] 0 containers: []
	W0722 00:53:22.937095   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:22.937103   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:22.937117   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:22.949975   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:22.950006   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:23.017282   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:23.017387   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:23.017411   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:23.093092   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:23.093125   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:23.130173   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:23.130201   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:19.267114   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.879712   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.880366   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:21.969497   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:23.969610   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:26.470072   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.683824   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:25.697279   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:25.697368   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:25.730208   71766 cri.go:89] found id: ""
	I0722 00:53:25.730230   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.730237   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:25.730243   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:25.730298   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:25.762201   71766 cri.go:89] found id: ""
	I0722 00:53:25.762228   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.762239   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:25.762246   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:25.762323   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:25.794899   71766 cri.go:89] found id: ""
	I0722 00:53:25.794928   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.794938   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:25.794946   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:25.795011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:25.827698   71766 cri.go:89] found id: ""
	I0722 00:53:25.827726   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.827737   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:25.827743   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:25.827793   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:25.859621   71766 cri.go:89] found id: ""
	I0722 00:53:25.859647   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.859655   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:25.859661   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:25.859711   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:25.892333   71766 cri.go:89] found id: ""
	I0722 00:53:25.892355   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.892368   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:25.892374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:25.892430   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:25.928601   71766 cri.go:89] found id: ""
	I0722 00:53:25.928630   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.928641   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:25.928648   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:25.928703   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:25.962888   71766 cri.go:89] found id: ""
	I0722 00:53:25.962913   71766 logs.go:276] 0 containers: []
	W0722 00:53:25.962924   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:25.962933   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:25.962951   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:26.032018   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:26.032037   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:26.032049   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:26.117675   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:26.117707   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:26.158906   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:26.158936   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:26.210768   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:26.210798   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:23.767556   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:25.767837   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:27.880422   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.380089   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.968462   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:31.469079   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:28.724411   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:28.738449   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:28.738527   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:28.772941   71766 cri.go:89] found id: ""
	I0722 00:53:28.772965   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.772976   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:28.772982   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:28.773030   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:28.812268   71766 cri.go:89] found id: ""
	I0722 00:53:28.812310   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.812321   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:28.812333   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:28.812395   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:28.845837   71766 cri.go:89] found id: ""
	I0722 00:53:28.845868   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.845879   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:28.845887   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:28.845945   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:28.881104   71766 cri.go:89] found id: ""
	I0722 00:53:28.881132   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.881141   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:28.881148   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:28.881206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:28.914020   71766 cri.go:89] found id: ""
	I0722 00:53:28.914043   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.914053   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:28.914060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:28.914118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:28.949764   71766 cri.go:89] found id: ""
	I0722 00:53:28.949790   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.949798   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:28.949804   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:28.949856   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:28.984463   71766 cri.go:89] found id: ""
	I0722 00:53:28.984493   71766 logs.go:276] 0 containers: []
	W0722 00:53:28.984504   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:28.984511   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:28.984573   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:29.017963   71766 cri.go:89] found id: ""
	I0722 00:53:29.017991   71766 logs.go:276] 0 containers: []
	W0722 00:53:29.018001   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:29.018011   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:29.018025   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:29.069551   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:29.069585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:29.082425   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:29.082452   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:29.151845   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:53:29.151869   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:29.151885   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:29.238904   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:29.238939   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:31.813691   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:31.826086   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:31.826148   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:31.857979   71766 cri.go:89] found id: ""
	I0722 00:53:31.858006   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.858017   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:31.858025   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:31.858074   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:31.890332   71766 cri.go:89] found id: ""
	I0722 00:53:31.890364   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.890372   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:31.890377   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:31.890422   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:31.926431   71766 cri.go:89] found id: ""
	I0722 00:53:31.926458   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.926467   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:31.926472   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:31.926537   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:31.960445   71766 cri.go:89] found id: ""
	I0722 00:53:31.960475   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.960483   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:31.960489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:31.960540   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:31.999765   71766 cri.go:89] found id: ""
	I0722 00:53:31.999802   71766 logs.go:276] 0 containers: []
	W0722 00:53:31.999810   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:31.999815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:31.999872   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:32.030453   71766 cri.go:89] found id: ""
	I0722 00:53:32.030476   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.030484   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:32.030489   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:32.030542   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:32.063446   71766 cri.go:89] found id: ""
	I0722 00:53:32.063481   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.063493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:32.063501   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:32.063581   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:32.100104   71766 cri.go:89] found id: ""
	I0722 00:53:32.100127   71766 logs.go:276] 0 containers: []
	W0722 00:53:32.100134   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:32.100142   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:32.100156   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:32.151231   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:32.151267   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:32.165999   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:32.166028   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:32.233365   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:32.233393   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:32.233407   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:32.311482   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:32.311520   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:28.267209   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:30.766397   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.768020   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:32.879747   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:34.880865   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:33.967894   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.470912   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
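The pod_ready lines interleaved through this section come from three other test profiles (PIDs 71396, 72069, 71227) running in parallel, each polling a metrics-server pod whose Ready condition stays False. A hedged kubectl equivalent of one poll, assuming the conventional k8s-app=metrics-server label (the selector is not shown in the log):

    # One-shot: print the Ready condition of matching pods
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
    # Blocking form, roughly what the test loop amounts to
    kubectl -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=60s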
	I0722 00:53:34.853608   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:34.867670   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:34.867736   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:34.904455   71766 cri.go:89] found id: ""
	I0722 00:53:34.904480   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.904488   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:34.904494   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:34.904553   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:34.942226   71766 cri.go:89] found id: ""
	I0722 00:53:34.942255   71766 logs.go:276] 0 containers: []
	W0722 00:53:34.942265   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:34.942272   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:34.942343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:35.006723   71766 cri.go:89] found id: ""
	I0722 00:53:35.006749   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.006761   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:35.006767   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:35.006831   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:35.043118   71766 cri.go:89] found id: ""
	I0722 00:53:35.043149   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.043160   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:35.043171   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:35.043238   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:35.079622   71766 cri.go:89] found id: ""
	I0722 00:53:35.079653   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.079664   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:35.079671   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:35.079748   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:35.112773   71766 cri.go:89] found id: ""
	I0722 00:53:35.112795   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.112807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:35.112813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:35.112873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.148463   71766 cri.go:89] found id: ""
	I0722 00:53:35.148486   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.148493   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:35.148502   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:35.148563   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:35.183594   71766 cri.go:89] found id: ""
	I0722 00:53:35.183620   71766 logs.go:276] 0 containers: []
	W0722 00:53:35.183628   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:35.183636   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:35.183647   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:35.198020   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:35.198047   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:35.263495   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:35.263575   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:35.263596   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:35.347220   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:35.347252   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:35.385603   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:35.385629   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:37.943765   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:37.959330   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:37.959406   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:37.996577   71766 cri.go:89] found id: ""
	I0722 00:53:37.996608   71766 logs.go:276] 0 containers: []
	W0722 00:53:37.996619   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:37.996627   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:37.996700   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:38.029775   71766 cri.go:89] found id: ""
	I0722 00:53:38.029805   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.029815   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:38.029822   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:38.029884   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:38.061857   71766 cri.go:89] found id: ""
	I0722 00:53:38.061884   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.061893   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:38.061901   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:38.061960   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:38.094929   71766 cri.go:89] found id: ""
	I0722 00:53:38.094957   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.094968   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:38.094976   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:38.095039   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:38.126875   71766 cri.go:89] found id: ""
	I0722 00:53:38.126906   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.126918   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:38.126925   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:38.126985   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:38.159344   71766 cri.go:89] found id: ""
	I0722 00:53:38.159382   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.159393   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:38.159400   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:38.159460   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:35.267113   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:37.766847   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:36.881532   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:39.380188   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.380578   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.967755   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:40.967933   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:38.190794   71766 cri.go:89] found id: ""
	I0722 00:53:38.190826   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.190837   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:38.190844   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:38.190902   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:38.226247   71766 cri.go:89] found id: ""
	I0722 00:53:38.226270   71766 logs.go:276] 0 containers: []
	W0722 00:53:38.226279   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:38.226287   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:38.226308   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:38.279792   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:38.279833   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:38.293269   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:38.293303   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:38.356156   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:38.356182   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:38.356199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:38.435267   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:38.435300   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:40.976586   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:41.001504   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:41.001574   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:41.052085   71766 cri.go:89] found id: ""
	I0722 00:53:41.052108   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.052116   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:41.052121   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:41.052170   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:41.099417   71766 cri.go:89] found id: ""
	I0722 00:53:41.099446   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.099456   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:41.099464   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:41.099529   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:41.134982   71766 cri.go:89] found id: ""
	I0722 00:53:41.135009   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.135019   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:41.135026   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:41.135090   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:41.170517   71766 cri.go:89] found id: ""
	I0722 00:53:41.170546   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.170557   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:41.170564   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:41.170659   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:41.202618   71766 cri.go:89] found id: ""
	I0722 00:53:41.202648   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.202658   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:41.202665   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:41.202726   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:41.235355   71766 cri.go:89] found id: ""
	I0722 00:53:41.235388   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.235399   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:41.235406   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:41.235465   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:41.269925   71766 cri.go:89] found id: ""
	I0722 00:53:41.269951   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.269960   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:41.269967   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:41.270024   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:41.304453   71766 cri.go:89] found id: ""
	I0722 00:53:41.304480   71766 logs.go:276] 0 containers: []
	W0722 00:53:41.304491   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:41.304502   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:41.304517   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:41.357332   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:41.357373   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:41.370693   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:41.370721   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:41.440471   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:41.440509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:41.440525   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:41.519730   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:41.519769   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:39.767164   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:41.767350   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:43.380764   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.879955   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:42.968385   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:44.060538   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:44.074078   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:44.074139   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:44.106552   71766 cri.go:89] found id: ""
	I0722 00:53:44.106585   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.106595   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:44.106617   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:44.106681   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:44.139033   71766 cri.go:89] found id: ""
	I0722 00:53:44.139063   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.139073   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:44.139078   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:44.139127   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:44.172836   71766 cri.go:89] found id: ""
	I0722 00:53:44.172863   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.172874   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:44.172882   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:44.172935   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:44.204694   71766 cri.go:89] found id: ""
	I0722 00:53:44.204722   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.204730   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:44.204735   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:44.204794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:44.237301   71766 cri.go:89] found id: ""
	I0722 00:53:44.237329   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.237337   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:44.237343   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:44.237418   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:44.272315   71766 cri.go:89] found id: ""
	I0722 00:53:44.272341   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.272353   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:44.272360   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:44.272424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:44.305436   71766 cri.go:89] found id: ""
	I0722 00:53:44.305462   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.305470   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:44.305475   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:44.305526   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:44.336148   71766 cri.go:89] found id: ""
	I0722 00:53:44.336174   71766 logs.go:276] 0 containers: []
	W0722 00:53:44.336186   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:44.336195   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:44.336211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:44.348904   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:44.348932   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:44.424908   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:44.424931   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:44.424944   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:44.502082   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:44.502116   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:44.538366   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:44.538400   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.093414   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:47.107017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:47.107093   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:47.140036   71766 cri.go:89] found id: ""
	I0722 00:53:47.140063   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.140071   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:47.140076   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:47.140122   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:47.172685   71766 cri.go:89] found id: ""
	I0722 00:53:47.172710   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.172717   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:47.172723   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:47.172769   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:47.204244   71766 cri.go:89] found id: ""
	I0722 00:53:47.204278   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.204287   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:47.204293   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:47.204379   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:47.237209   71766 cri.go:89] found id: ""
	I0722 00:53:47.237234   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.237242   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:47.237247   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:47.237301   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:47.272019   71766 cri.go:89] found id: ""
	I0722 00:53:47.272048   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.272058   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:47.272067   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:47.272133   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:47.310014   71766 cri.go:89] found id: ""
	I0722 00:53:47.310043   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.310052   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:47.310060   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:47.310120   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:47.344457   71766 cri.go:89] found id: ""
	I0722 00:53:47.344479   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.344486   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:47.344492   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:47.344549   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:47.377258   71766 cri.go:89] found id: ""
	I0722 00:53:47.377285   71766 logs.go:276] 0 containers: []
	W0722 00:53:47.377295   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:47.377305   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:47.377318   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:47.430414   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:47.430455   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:47.443173   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:47.443199   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:47.512197   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:47.512218   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:47.512237   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:47.594318   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:47.594349   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:43.767439   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:45.767732   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.880295   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.381064   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:47.469180   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:49.968163   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.133612   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:50.147749   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:50.147824   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:50.183236   71766 cri.go:89] found id: ""
	I0722 00:53:50.183260   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.183268   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:50.183273   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:50.183340   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:50.221161   71766 cri.go:89] found id: ""
	I0722 00:53:50.221187   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.221195   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:50.221201   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:50.221261   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:50.252996   71766 cri.go:89] found id: ""
	I0722 00:53:50.253029   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.253039   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:50.253047   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:50.253107   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:50.290350   71766 cri.go:89] found id: ""
	I0722 00:53:50.290379   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.290391   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:50.290399   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:50.290461   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:50.323396   71766 cri.go:89] found id: ""
	I0722 00:53:50.323426   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.323438   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:50.323445   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:50.323503   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:50.357712   71766 cri.go:89] found id: ""
	I0722 00:53:50.357733   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.357741   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:50.357747   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:50.357794   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:50.391647   71766 cri.go:89] found id: ""
	I0722 00:53:50.391670   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.391678   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:50.391683   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:50.391730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:50.423013   71766 cri.go:89] found id: ""
	I0722 00:53:50.423042   71766 logs.go:276] 0 containers: []
	W0722 00:53:50.423054   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:50.423065   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:50.423102   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:50.476373   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:50.476403   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:50.490405   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:50.490432   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:50.568832   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:50.568855   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:50.568870   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:50.657761   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:50.657794   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:48.268342   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:50.268655   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.768088   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:52.880216   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:55.380026   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:51.968790   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:54.468217   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:56.468392   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:53.202175   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:53.216341   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:53.216419   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:53.249620   71766 cri.go:89] found id: ""
	I0722 00:53:53.249649   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.249658   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:53.249664   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:53.249727   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:53.283930   71766 cri.go:89] found id: ""
	I0722 00:53:53.283958   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.283968   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:53.283976   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:53.284029   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:53.315698   71766 cri.go:89] found id: ""
	I0722 00:53:53.315726   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.315736   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:53.315745   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:53.315804   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:53.350118   71766 cri.go:89] found id: ""
	I0722 00:53:53.350149   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.350173   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:53.350180   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:53.350255   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:53.384972   71766 cri.go:89] found id: ""
	I0722 00:53:53.385002   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.385011   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:53.385017   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:53.385070   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:53.417592   71766 cri.go:89] found id: ""
	I0722 00:53:53.417621   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.417630   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:53.417636   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:53.417684   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:53.449619   71766 cri.go:89] found id: ""
	I0722 00:53:53.449651   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.449664   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:53.449672   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:53.449735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:53.484970   71766 cri.go:89] found id: ""
	I0722 00:53:53.484996   71766 logs.go:276] 0 containers: []
	W0722 00:53:53.485006   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:53.485015   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:53.485031   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:53.498146   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:53.498183   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:53.564478   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:53.564519   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:53.564546   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:53.645619   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:53.645664   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:53.682894   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:53.682919   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.235216   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:56.247779   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:56.247843   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:56.283692   71766 cri.go:89] found id: ""
	I0722 00:53:56.283720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.283729   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:56.283736   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:56.283796   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:56.318901   71766 cri.go:89] found id: ""
	I0722 00:53:56.318926   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.318935   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:56.318940   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:56.318997   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:56.353254   71766 cri.go:89] found id: ""
	I0722 00:53:56.353279   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.353286   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:56.353292   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:56.353347   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:56.388189   71766 cri.go:89] found id: ""
	I0722 00:53:56.388212   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.388219   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:56.388224   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:56.388285   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:56.419694   71766 cri.go:89] found id: ""
	I0722 00:53:56.419720   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.419731   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:56.419741   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:56.419800   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:56.452652   71766 cri.go:89] found id: ""
	I0722 00:53:56.452674   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.452682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:56.452688   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:56.452742   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:56.486892   71766 cri.go:89] found id: ""
	I0722 00:53:56.486924   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.486937   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:56.486944   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:56.487015   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:56.519511   71766 cri.go:89] found id: ""
	I0722 00:53:56.519540   71766 logs.go:276] 0 containers: []
	W0722 00:53:56.519561   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:56.519571   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:56.519585   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:56.596061   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:56.596096   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:56.632348   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:56.632390   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:53:56.684760   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:56.684792   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:56.698499   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:56.698531   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:56.767690   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:55.268115   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.767505   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:57.880079   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.385042   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:58.469077   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:00.967753   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:53:59.268326   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:53:59.281623   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:53:59.281696   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:53:59.314418   71766 cri.go:89] found id: ""
	I0722 00:53:59.314441   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.314449   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:53:59.314459   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:53:59.314513   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:53:59.345235   71766 cri.go:89] found id: ""
	I0722 00:53:59.345267   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.345277   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:53:59.345286   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:53:59.345345   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:53:59.376966   71766 cri.go:89] found id: ""
	I0722 00:53:59.376997   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.377008   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:53:59.377015   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:53:59.377072   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:53:59.408627   71766 cri.go:89] found id: ""
	I0722 00:53:59.408660   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.408672   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:53:59.408680   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:53:59.408730   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:53:59.440372   71766 cri.go:89] found id: ""
	I0722 00:53:59.440401   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.440412   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:53:59.440419   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:53:59.440474   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:53:59.477553   71766 cri.go:89] found id: ""
	I0722 00:53:59.477583   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.477594   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:53:59.477610   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:53:59.477663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:53:59.513020   71766 cri.go:89] found id: ""
	I0722 00:53:59.513052   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.513060   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:53:59.513066   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:53:59.513115   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:53:59.544400   71766 cri.go:89] found id: ""
	I0722 00:53:59.544428   71766 logs.go:276] 0 containers: []
	W0722 00:53:59.544438   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:53:59.544448   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:53:59.544464   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:53:59.557237   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:53:59.557264   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:53:59.627742   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0722 00:53:59.627763   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:53:59.627777   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:53:59.706394   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:53:59.706433   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:53:59.745650   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:53:59.745681   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.297140   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:02.310660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:02.310735   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:02.348011   71766 cri.go:89] found id: ""
	I0722 00:54:02.348041   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.348052   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:02.348059   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:02.348118   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:02.384256   71766 cri.go:89] found id: ""
	I0722 00:54:02.384282   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.384291   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:02.384297   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:02.384355   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:02.419378   71766 cri.go:89] found id: ""
	I0722 00:54:02.419409   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.419420   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:02.419427   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:02.419492   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:02.452830   71766 cri.go:89] found id: ""
	I0722 00:54:02.452857   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.452868   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:02.452874   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:02.452939   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:02.486387   71766 cri.go:89] found id: ""
	I0722 00:54:02.486415   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.486427   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:02.486434   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:02.486500   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:02.518758   71766 cri.go:89] found id: ""
	I0722 00:54:02.518792   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.518803   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:02.518810   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:02.518868   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:02.554965   71766 cri.go:89] found id: ""
	I0722 00:54:02.554993   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.555002   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:02.555007   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:02.555054   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:02.593104   71766 cri.go:89] found id: ""
	I0722 00:54:02.593133   71766 logs.go:276] 0 containers: []
	W0722 00:54:02.593144   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:02.593154   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:02.593170   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:02.646677   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:02.646714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:02.660710   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:02.660746   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:02.741789   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:02.741810   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:02.741824   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:02.831476   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:02.831516   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:00.267099   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.768759   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.879898   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:04.880477   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:02.968620   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:05.468934   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
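The interleaved lines from processes 71396, 72069 and 71227 are parallel StartStop tests polling their metrics-server pods for the Ready condition roughly every 2.5 seconds (compare the timestamps), each reporting `"Ready":"False"`. A rough kubectl equivalent of that readiness check (a sketch; the `k8s-app=metrics-server` label selector is an assumption and does not appear in the log):

    # Sketch: print each metrics-server pod's Ready condition, as pod_ready.go polls it.
    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'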
	I0722 00:54:05.371820   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:05.385083   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:05.385142   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:05.418266   71766 cri.go:89] found id: ""
	I0722 00:54:05.418297   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.418307   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:05.418314   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:05.418373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:05.452943   71766 cri.go:89] found id: ""
	I0722 00:54:05.452976   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.452988   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:05.452996   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:05.453055   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:05.486004   71766 cri.go:89] found id: ""
	I0722 00:54:05.486036   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.486045   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:05.486052   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:05.486101   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:05.518207   71766 cri.go:89] found id: ""
	I0722 00:54:05.518237   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.518247   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:05.518254   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:05.518319   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:05.549553   71766 cri.go:89] found id: ""
	I0722 00:54:05.549578   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.549585   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:05.549592   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:05.549641   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:05.580924   71766 cri.go:89] found id: ""
	I0722 00:54:05.580951   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.580958   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:05.580964   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:05.581011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:05.617321   71766 cri.go:89] found id: ""
	I0722 00:54:05.617347   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.617357   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:05.617364   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:05.617479   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:05.649252   71766 cri.go:89] found id: ""
	I0722 00:54:05.649278   71766 logs.go:276] 0 containers: []
	W0722 00:54:05.649289   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:05.649299   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:05.649314   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:05.661980   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:05.662013   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:05.733477   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:05.733506   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:05.733522   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:05.817723   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:05.817758   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:05.855380   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:05.855406   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:05.267531   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.267727   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.380315   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:09.381289   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:07.968193   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:10.467628   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:08.409478   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:08.423229   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:08.423293   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:08.455809   71766 cri.go:89] found id: ""
	I0722 00:54:08.455841   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.455852   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:08.455860   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:08.455910   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:08.489523   71766 cri.go:89] found id: ""
	I0722 00:54:08.489552   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.489562   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:08.489569   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:08.489643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:08.521034   71766 cri.go:89] found id: ""
	I0722 00:54:08.521061   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.521068   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:08.521074   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:08.521126   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:08.559343   71766 cri.go:89] found id: ""
	I0722 00:54:08.559369   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.559380   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:08.559386   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:08.559447   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:08.594247   71766 cri.go:89] found id: ""
	I0722 00:54:08.594277   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.594285   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:08.594290   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:08.594343   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:08.626651   71766 cri.go:89] found id: ""
	I0722 00:54:08.626674   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.626682   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:08.626687   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:08.626739   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:08.660291   71766 cri.go:89] found id: ""
	I0722 00:54:08.660327   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.660337   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:08.660344   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:08.660407   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:08.692689   71766 cri.go:89] found id: ""
	I0722 00:54:08.692716   71766 logs.go:276] 0 containers: []
	W0722 00:54:08.692724   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:08.692732   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:08.692742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:08.745023   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:08.745061   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:08.758354   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:08.758391   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:08.823223   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:08.823246   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:08.823259   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:08.912959   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:08.913009   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
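Between container probes the collector gathers four log sources over SSH: kubelet and CRI-O via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback. Bundled into one script, that gathering step looks roughly like this (a sketch reusing the commands verbatim from the log lines above):

    #!/bin/bash
    # Sketch: the four "Gathering logs for ..." commands, run in one pass on the node.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a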
	I0722 00:54:11.451961   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:11.464705   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:11.464773   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:11.498809   71766 cri.go:89] found id: ""
	I0722 00:54:11.498836   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.498846   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:11.498854   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:11.498917   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:11.530919   71766 cri.go:89] found id: ""
	I0722 00:54:11.530947   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.530957   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:11.530962   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:11.531027   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:11.566381   71766 cri.go:89] found id: ""
	I0722 00:54:11.566407   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.566417   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:11.566425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:11.566496   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:11.595960   71766 cri.go:89] found id: ""
	I0722 00:54:11.595981   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.595989   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:11.595994   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:11.596040   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:11.626994   71766 cri.go:89] found id: ""
	I0722 00:54:11.627024   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.627033   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:11.627038   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:11.627089   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:11.668340   71766 cri.go:89] found id: ""
	I0722 00:54:11.668375   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.668382   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:11.668387   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:11.668439   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:11.702527   71766 cri.go:89] found id: ""
	I0722 00:54:11.702557   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.702568   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:11.702577   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:11.702648   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:11.736613   71766 cri.go:89] found id: ""
	I0722 00:54:11.736639   71766 logs.go:276] 0 containers: []
	W0722 00:54:11.736650   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:11.736659   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:11.736673   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:11.794680   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:11.794714   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:11.808955   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:11.808983   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:11.873772   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:11.873796   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:11.873815   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:11.959183   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:11.959219   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:09.767906   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.266228   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:11.880056   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:13.880234   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.380266   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:12.468449   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.468940   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:14.499978   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:14.514820   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:14.514881   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:14.550328   71766 cri.go:89] found id: ""
	I0722 00:54:14.550356   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.550364   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:14.550370   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:14.550417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:14.583728   71766 cri.go:89] found id: ""
	I0722 00:54:14.583753   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.583761   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:14.583766   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:14.583818   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:14.617599   71766 cri.go:89] found id: ""
	I0722 00:54:14.617632   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.617639   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:14.617647   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:14.617701   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:14.651610   71766 cri.go:89] found id: ""
	I0722 00:54:14.651641   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.651653   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:14.651660   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:14.651719   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:14.686475   71766 cri.go:89] found id: ""
	I0722 00:54:14.686500   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.686510   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:14.686516   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:14.686577   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:14.719770   71766 cri.go:89] found id: ""
	I0722 00:54:14.719797   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.719807   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:14.719815   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:14.719876   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:14.755222   71766 cri.go:89] found id: ""
	I0722 00:54:14.755250   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.755259   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:14.755264   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:14.755322   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:14.787181   71766 cri.go:89] found id: ""
	I0722 00:54:14.787213   71766 logs.go:276] 0 containers: []
	W0722 00:54:14.787222   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:14.787232   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:14.787247   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:14.853389   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:14.853422   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:14.867115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:14.867144   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:14.939701   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:14.939720   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:14.939732   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:15.027704   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:15.027741   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:17.569694   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:17.582493   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:17.582552   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:17.613243   71766 cri.go:89] found id: ""
	I0722 00:54:17.613272   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.613283   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:17.613290   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:17.613352   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:17.646230   71766 cri.go:89] found id: ""
	I0722 00:54:17.646258   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.646268   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:17.646276   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:17.646337   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:17.678891   71766 cri.go:89] found id: ""
	I0722 00:54:17.678913   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.678921   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:17.678926   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:17.678974   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:17.715202   71766 cri.go:89] found id: ""
	I0722 00:54:17.715226   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.715233   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:17.715239   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:17.715289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:17.748219   71766 cri.go:89] found id: ""
	I0722 00:54:17.748248   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.748258   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:17.748265   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:17.748332   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:17.785957   71766 cri.go:89] found id: ""
	I0722 00:54:17.785987   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.785997   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:17.786005   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:17.786060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:17.818559   71766 cri.go:89] found id: ""
	I0722 00:54:17.818588   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.818596   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:17.818619   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:17.818677   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:17.851185   71766 cri.go:89] found id: ""
	I0722 00:54:17.851208   71766 logs.go:276] 0 containers: []
	W0722 00:54:17.851215   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:17.851223   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:17.851234   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:17.901949   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:17.901978   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:17.915023   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:17.915055   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:17.980878   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:17.980896   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:17.980910   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:18.062848   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:18.062886   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:14.266985   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.766496   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.380364   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.380800   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:16.968677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:18.969191   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:21.468563   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.601554   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:20.614046   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:20.614140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:20.646913   71766 cri.go:89] found id: ""
	I0722 00:54:20.646938   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.646947   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:20.646954   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:20.647011   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:20.680012   71766 cri.go:89] found id: ""
	I0722 00:54:20.680044   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.680056   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:20.680063   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:20.680129   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:20.713769   71766 cri.go:89] found id: ""
	I0722 00:54:20.713796   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.713803   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:20.713809   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:20.713871   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:20.745504   71766 cri.go:89] found id: ""
	I0722 00:54:20.745536   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.745547   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:20.745565   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:20.745632   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:20.780353   71766 cri.go:89] found id: ""
	I0722 00:54:20.780380   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.780390   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:20.780396   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:20.780470   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:20.812854   71766 cri.go:89] found id: ""
	I0722 00:54:20.812877   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.812884   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:20.812890   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:20.812953   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:20.848881   71766 cri.go:89] found id: ""
	I0722 00:54:20.848906   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.848915   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:20.848920   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:20.848982   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:20.881709   71766 cri.go:89] found id: ""
	I0722 00:54:20.881737   71766 logs.go:276] 0 containers: []
	W0722 00:54:20.881743   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:20.881751   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:20.881761   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:20.933479   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:20.933514   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:20.947115   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:20.947140   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:21.019531   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:21.019554   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:21.019578   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:21.100388   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:21.100435   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:18.767810   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:20.768050   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:22.880227   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:24.880383   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:23.469402   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.969026   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
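Every "describe nodes" attempt in these cycles fails with "The connection to the server localhost:8443 was refused" for the same underlying reason the crictl probes come back empty: there is no kube-apiserver container to serve that port. A quick on-node check for that condition (a sketch; assumes the default 8443 API port shown in the error message):

    # Sketch: confirm nothing is serving the apiserver port kubectl is dialing.
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
    # curl fails outright on connection refused; an HTTP 401/403 would still mean the port is up.
    curl -ksS https://localhost:8443/healthz || echo "apiserver unreachable"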
	I0722 00:54:23.638646   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:23.651324   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:23.651393   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:23.683844   71766 cri.go:89] found id: ""
	I0722 00:54:23.683876   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.683887   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:23.683893   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:23.683943   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:23.719561   71766 cri.go:89] found id: ""
	I0722 00:54:23.719591   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.719602   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:23.719609   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:23.719669   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:23.751866   71766 cri.go:89] found id: ""
	I0722 00:54:23.751889   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.751897   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:23.751903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:23.751961   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:23.786325   71766 cri.go:89] found id: ""
	I0722 00:54:23.786353   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.786369   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:23.786374   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:23.786424   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:23.817778   71766 cri.go:89] found id: ""
	I0722 00:54:23.817806   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.817814   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:23.817819   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:23.817877   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:23.850983   71766 cri.go:89] found id: ""
	I0722 00:54:23.851012   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.851021   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:23.851029   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:23.851096   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:23.884786   71766 cri.go:89] found id: ""
	I0722 00:54:23.884817   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.884827   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:23.884833   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:23.884886   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:23.917148   71766 cri.go:89] found id: ""
	I0722 00:54:23.917177   71766 logs.go:276] 0 containers: []
	W0722 00:54:23.917187   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:23.917197   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:23.917211   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:23.972250   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:23.972280   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:23.985585   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:23.985610   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:24.053293   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:24.053315   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:24.053326   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:24.130844   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:24.130881   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:26.669432   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:26.681903   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:26.681978   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:26.718314   71766 cri.go:89] found id: ""
	I0722 00:54:26.718348   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.718359   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:26.718366   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:26.718438   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:26.751475   71766 cri.go:89] found id: ""
	I0722 00:54:26.751499   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.751508   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:26.751513   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:26.751560   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:26.787340   71766 cri.go:89] found id: ""
	I0722 00:54:26.787364   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.787372   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:26.787377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:26.787428   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:26.822094   71766 cri.go:89] found id: ""
	I0722 00:54:26.822124   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.822136   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:26.822143   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:26.822206   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:26.855208   71766 cri.go:89] found id: ""
	I0722 00:54:26.855232   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.855243   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:26.855251   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:26.855314   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:26.887817   71766 cri.go:89] found id: ""
	I0722 00:54:26.887842   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.887852   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:26.887863   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:26.887926   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:26.921224   71766 cri.go:89] found id: ""
	I0722 00:54:26.921254   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.921266   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:26.921273   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:26.921341   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:26.972407   71766 cri.go:89] found id: ""
	I0722 00:54:26.972432   71766 logs.go:276] 0 containers: []
	W0722 00:54:26.972441   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:26.972451   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:26.972466   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:27.024894   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:27.024929   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:27.046807   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:27.046838   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:27.116261   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:27.116284   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:27.116298   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:27.200625   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:27.200660   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:23.266119   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:25.266484   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:27.269071   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:26.880904   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.381269   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:28.467984   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:30.472670   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:29.739274   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:29.755075   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:29.755152   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:29.797317   71766 cri.go:89] found id: ""
	I0722 00:54:29.797341   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.797349   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:29.797360   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:29.797417   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:29.833416   71766 cri.go:89] found id: ""
	I0722 00:54:29.833436   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.833444   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:29.833449   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:29.833504   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:29.872018   71766 cri.go:89] found id: ""
	I0722 00:54:29.872053   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.872063   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:29.872070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:29.872138   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:29.908720   71766 cri.go:89] found id: ""
	I0722 00:54:29.908751   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.908763   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:29.908771   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:29.908821   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:29.942034   71766 cri.go:89] found id: ""
	I0722 00:54:29.942056   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.942064   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:29.942070   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:29.942116   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:29.975198   71766 cri.go:89] found id: ""
	I0722 00:54:29.975220   71766 logs.go:276] 0 containers: []
	W0722 00:54:29.975228   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:29.975233   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:29.975289   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:30.006965   71766 cri.go:89] found id: ""
	I0722 00:54:30.006995   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.007004   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:30.007009   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:30.007060   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:30.040691   71766 cri.go:89] found id: ""
	I0722 00:54:30.040713   71766 logs.go:276] 0 containers: []
	W0722 00:54:30.040722   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:30.040729   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:30.040742   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:30.079030   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:30.079072   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:30.130039   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:30.130069   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:30.142882   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:30.142912   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:30.216570   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:30.216586   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:30.216599   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:32.802669   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:32.816928   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:32.816996   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:32.851272   71766 cri.go:89] found id: ""
	I0722 00:54:32.851295   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.851304   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:32.851309   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:32.851373   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:32.884476   71766 cri.go:89] found id: ""
	I0722 00:54:32.884506   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.884514   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:32.884519   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:32.884564   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:32.919658   71766 cri.go:89] found id: ""
	I0722 00:54:32.919686   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.919697   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:32.919703   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:32.919761   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:32.954727   71766 cri.go:89] found id: ""
	I0722 00:54:32.954755   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.954765   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:32.954772   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:32.954832   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:32.988968   71766 cri.go:89] found id: ""
	I0722 00:54:32.988998   71766 logs.go:276] 0 containers: []
	W0722 00:54:32.989009   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:32.989016   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:32.989140   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:33.022766   71766 cri.go:89] found id: ""
	I0722 00:54:33.022795   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.022805   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:33.022813   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:33.022873   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:33.062994   71766 cri.go:89] found id: ""
	I0722 00:54:33.063022   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.063029   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:33.063035   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:33.063082   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:33.096788   71766 cri.go:89] found id: ""
	I0722 00:54:33.096821   71766 logs.go:276] 0 containers: []
	W0722 00:54:33.096833   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:33.096845   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:33.096862   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:33.153123   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:33.153159   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:33.169366   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:33.169392   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:54:29.269943   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.767451   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:31.879943   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:33.880014   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:35.881323   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:32.968047   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:34.968770   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	W0722 00:54:33.233302   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:33.233330   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:33.233347   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:33.322923   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:33.322960   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:35.864726   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:35.877957   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:54:35.878037   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:54:35.915134   71766 cri.go:89] found id: ""
	I0722 00:54:35.915162   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.915194   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:54:35.915201   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:54:35.915260   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:54:35.951633   71766 cri.go:89] found id: ""
	I0722 00:54:35.951662   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.951672   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:54:35.951678   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:54:35.951738   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:54:35.983606   71766 cri.go:89] found id: ""
	I0722 00:54:35.983628   71766 logs.go:276] 0 containers: []
	W0722 00:54:35.983636   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:54:35.983641   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:54:35.983691   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:54:36.016559   71766 cri.go:89] found id: ""
	I0722 00:54:36.016581   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.016589   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:54:36.016594   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:54:36.016663   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:54:36.050329   71766 cri.go:89] found id: ""
	I0722 00:54:36.050355   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.050366   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:54:36.050373   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:54:36.050425   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:54:36.081831   71766 cri.go:89] found id: ""
	I0722 00:54:36.081870   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.081888   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:54:36.081896   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:54:36.081964   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:54:36.114708   71766 cri.go:89] found id: ""
	I0722 00:54:36.114731   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.114738   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:54:36.114744   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:54:36.114791   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:54:36.146728   71766 cri.go:89] found id: ""
	I0722 00:54:36.146757   71766 logs.go:276] 0 containers: []
	W0722 00:54:36.146768   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:54:36.146779   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:54:36.146797   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:54:36.198630   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:54:36.198674   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:54:36.214029   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:54:36.214057   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:54:36.280091   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:54:36.280118   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:54:36.280132   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:54:36.354677   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:54:36.354711   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:54:34.265900   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.266983   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.379941   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.880391   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:36.969091   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:39.468441   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:38.895805   71766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:54:38.909259   71766 kubeadm.go:597] duration metric: took 4m4.578600812s to restartPrimaryControlPlane
	W0722 00:54:38.909427   71766 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:38.909476   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:38.267120   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:40.267188   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:42.766839   71396 pod_ready.go:102] pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:43.602197   71766 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.692696415s)
	I0722 00:54:43.602281   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:54:43.617085   71766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:54:43.626977   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:54:43.636815   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:54:43.636842   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:54:43.636897   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:54:43.645420   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:54:43.645487   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:54:43.654370   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:54:43.662646   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:54:43.662702   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:54:43.671920   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.682142   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:54:43.682192   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:54:43.691352   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:54:43.699972   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:54:43.700020   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
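The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so the following kubeadm init regenerates it. Condensed into one loop (a sketch of the behavior shown, not minikube's actual source):

    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      sudo grep -q "https://control-plane.minikube.internal:8443" "$conf" \
        || sudo rm -f "$conf"    # missing file or stale endpoint: remove it
    done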
	I0722 00:54:43.709809   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:54:43.779085   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:54:43.779148   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:54:43.918858   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:54:43.918977   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:54:43.919066   71766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:54:44.082464   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:54:44.084298   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:54:44.084391   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:54:44.084478   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:54:44.084584   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:54:44.084672   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:54:44.084761   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:54:44.084825   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:54:44.085019   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:54:44.085481   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:54:44.085802   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:54:44.086215   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:54:44.086294   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:54:44.086376   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:54:44.273024   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:54:44.649095   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:54:45.082411   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:54:45.464402   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:54:45.478948   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:54:45.480058   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:54:45.480113   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:54:45.613502   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:54:43.380663   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.880255   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:41.968299   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:44.469324   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:45.615062   71766 out.go:204]   - Booting up control plane ...
	I0722 00:54:45.615200   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:54:45.626599   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:54:45.627529   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:54:45.628247   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:54:45.630321   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
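While kubeadm blocks on wait-control-plane, the kubelet should be launching the static pods just written to /etc/kubernetes/manifests. To watch that from the node (a sketch using the same crictl CLI seen throughout this log):

    ls /etc/kubernetes/manifests    # expect etcd, kube-apiserver, kube-controller-manager, kube-scheduler
    sudo crictl pods                # sandboxes the kubelet has created from those manifests
    sudo crictl ps -a               # containers starting up (or crash-looping)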
	I0722 00:54:44.761051   71396 pod_ready.go:81] duration metric: took 4m0.00034s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" ...
	E0722 00:54:44.761084   71396 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-k5q49" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:54:44.761103   71396 pod_ready.go:38] duration metric: took 4m14.405180834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:54:44.761136   71396 kubeadm.go:597] duration metric: took 4m21.702075452s to restartPrimaryControlPlane
	W0722 00:54:44.761226   71396 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:54:44.761257   71396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:54:48.380043   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:50.880643   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:46.968935   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:49.468435   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:51.468787   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.380550   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:55.880249   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:53.967677   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:56.468835   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:57.880415   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.380788   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:54:58.967489   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:00.967914   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.879384   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:04.880076   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:02.968410   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:05.467632   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:10.965462   71396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.204182419s)
	I0722 00:55:10.965551   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:10.997604   71396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:55:11.013241   71396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:55:11.027423   71396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:55:11.027442   71396 kubeadm.go:157] found existing configuration files:
	
	I0722 00:55:11.027502   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:55:11.039491   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:55:11.039568   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:55:11.051842   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:55:11.061183   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:55:11.061240   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:55:11.079403   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.087840   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:55:11.087895   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:55:11.097068   71396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:55:11.105864   71396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:55:11.105920   71396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:55:11.114736   71396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:55:11.158062   71396 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0722 00:55:11.158192   71396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:55:11.267407   71396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:55:11.267534   71396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:55:11.267670   71396 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:55:11.274766   71396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:55:07.380057   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.879379   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:07.468808   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:09.967871   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:11.276687   71396 out.go:204]   - Generating certificates and keys ...
	I0722 00:55:11.276787   71396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:55:11.276885   71396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:55:11.277009   71396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:55:11.277116   71396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:55:11.277244   71396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:55:11.277319   71396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:55:11.277412   71396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:55:11.277500   71396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:55:11.277610   71396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:55:11.277732   71396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:55:11.277776   71396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:55:11.277850   71396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:55:12.013724   71396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:55:12.426588   71396 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:55:12.741623   71396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:55:12.850325   71396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:55:13.105818   71396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:55:13.107032   71396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:55:13.111099   71396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:55:13.113653   71396 out.go:204]   - Booting up control plane ...
	I0722 00:55:13.113784   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:55:13.113882   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:55:13.113969   71396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:55:13.131701   71396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:55:13.138774   71396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:55:13.138920   71396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:55:11.879765   72069 pod_ready.go:102] pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.380046   72069 pod_ready.go:81] duration metric: took 4m0.006066291s for pod "metrics-server-569cc877fc-k68zp" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:13.380067   72069 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0722 00:55:13.380074   72069 pod_ready.go:38] duration metric: took 4m4.051469592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:13.380088   72069 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:13.380113   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:13.380156   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:13.428554   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.428579   72069 cri.go:89] found id: ""
	I0722 00:55:13.428590   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:13.428660   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.432975   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:13.433049   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:13.471340   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:13.471369   72069 cri.go:89] found id: ""
	I0722 00:55:13.471377   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:13.471435   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.475657   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:13.475721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:13.519128   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.519150   72069 cri.go:89] found id: ""
	I0722 00:55:13.519162   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:13.519218   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.522906   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:13.522971   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:13.557162   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.557183   72069 cri.go:89] found id: ""
	I0722 00:55:13.557190   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:13.557248   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.561058   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:13.561125   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:13.594436   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:13.594459   72069 cri.go:89] found id: ""
	I0722 00:55:13.594467   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:13.594520   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.598533   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:13.598633   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:13.638516   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:13.638535   72069 cri.go:89] found id: ""
	I0722 00:55:13.638542   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:13.638592   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.642408   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:13.642455   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:13.679920   72069 cri.go:89] found id: ""
	I0722 00:55:13.679946   72069 logs.go:276] 0 containers: []
	W0722 00:55:13.679952   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:13.679958   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:13.680005   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:13.713105   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.713130   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:13.713135   72069 cri.go:89] found id: ""
	I0722 00:55:13.713144   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:13.713194   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.717649   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:13.721157   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:13.721176   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:13.761998   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:13.762026   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:13.816759   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:13.816792   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:13.831415   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:13.831447   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:13.889267   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:13.889314   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:13.926050   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:13.926084   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:13.964709   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:13.964755   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:14.000589   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:14.000629   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:14.046791   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:14.046819   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:14.531722   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:14.531767   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:14.593888   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:14.593935   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:14.738836   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:14.738865   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:14.783390   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:14.783430   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
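Each gathering pass above follows one pattern: resolve crictl, then tail the last 400 lines per discovered container, plus the kubelet and crio journald units. Generalized (a sketch; the container ID is a placeholder, not a value from this run):

    CRICTL=$(which crictl || echo crictl)
    sudo "$CRICTL" logs --tail 400 <container-id>   # per-container logs
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400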
	I0722 00:55:11.968442   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:14.469492   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:13.267658   71396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:55:13.267806   71396 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:55:14.269137   71396 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001382215s
	I0722 00:55:14.269249   71396 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:55:19.272729   71396 kubeadm.go:310] [api-check] The API server is healthy after 5.001619742s
	I0722 00:55:19.284039   71396 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:55:19.301504   71396 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:55:19.336655   71396 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:55:19.336943   71396 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-945581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:55:19.348637   71396 kubeadm.go:310] [bootstrap-token] Using token: 9e6gcb.gkxqsytc0123rjml
	I0722 00:55:19.349891   71396 out.go:204]   - Configuring RBAC rules ...
	I0722 00:55:19.350061   71396 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:55:19.359962   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:55:19.368413   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:55:19.372267   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:55:19.376336   71396 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:55:19.379705   71396 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:55:19.677713   71396 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:55:20.124051   71396 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:55:20.678242   71396 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:55:20.679733   71396 kubeadm.go:310] 
	I0722 00:55:20.679796   71396 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:55:20.679804   71396 kubeadm.go:310] 
	I0722 00:55:20.679923   71396 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:55:20.679941   71396 kubeadm.go:310] 
	I0722 00:55:20.679976   71396 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:55:20.680059   71396 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:55:20.680137   71396 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:55:20.680152   71396 kubeadm.go:310] 
	I0722 00:55:20.680220   71396 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:55:20.680230   71396 kubeadm.go:310] 
	I0722 00:55:20.680269   71396 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:55:20.680278   71396 kubeadm.go:310] 
	I0722 00:55:20.680324   71396 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:55:20.680391   71396 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:55:20.680486   71396 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:55:20.680500   71396 kubeadm.go:310] 
	I0722 00:55:20.680618   71396 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:55:20.680752   71396 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:55:20.680765   71396 kubeadm.go:310] 
	I0722 00:55:20.680835   71396 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.680970   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:55:20.681004   71396 kubeadm.go:310] 	--control-plane 
	I0722 00:55:20.681012   71396 kubeadm.go:310] 
	I0722 00:55:20.681135   71396 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:55:20.681145   71396 kubeadm.go:310] 
	I0722 00:55:20.681231   71396 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9e6gcb.gkxqsytc0123rjml \
	I0722 00:55:20.681377   71396 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
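The --discovery-token-ca-cert-hash printed above can be recomputed on the control-plane node from the cluster CA public key; this is the standard kubeadm procedure (a sketch; the certificate path comes from the certificateDir minikube reports in this log):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'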
	I0722 00:55:20.683323   71396 kubeadm.go:310] W0722 00:55:11.131256    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683628   71396 kubeadm.go:310] W0722 00:55:11.132014    2882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0722 00:55:20.683724   71396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:55:20.683749   71396 cni.go:84] Creating CNI manager for ""
	I0722 00:55:20.683758   71396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:55:20.686246   71396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
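Bridge CNI here means minikube writes a conflist for the upstream bridge plugin instead of deploying a CNI daemonset. An illustrative conflist of that shape (a sketch only; the file path and pod subnet are assumptions, not values read from this log):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF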
	I0722 00:55:17.326468   72069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:17.343789   72069 api_server.go:72] duration metric: took 4m15.73034313s to wait for apiserver process to appear ...
	I0722 00:55:17.343819   72069 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:17.343860   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:17.343924   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:17.382195   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:17.382224   72069 cri.go:89] found id: ""
	I0722 00:55:17.382234   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:17.382306   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.386922   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:17.386998   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:17.433391   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:17.433420   72069 cri.go:89] found id: ""
	I0722 00:55:17.433430   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:17.433489   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.438300   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:17.438369   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:17.483215   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:17.483270   72069 cri.go:89] found id: ""
	I0722 00:55:17.483281   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:17.483334   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.488146   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:17.488219   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:17.526507   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:17.526530   72069 cri.go:89] found id: ""
	I0722 00:55:17.526538   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:17.526589   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.530650   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:17.530721   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:17.573794   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.573821   72069 cri.go:89] found id: ""
	I0722 00:55:17.573831   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:17.573894   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.578101   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:17.578180   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:17.619233   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.619262   72069 cri.go:89] found id: ""
	I0722 00:55:17.619272   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:17.619333   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.623410   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:17.623483   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:17.660310   72069 cri.go:89] found id: ""
	I0722 00:55:17.660336   72069 logs.go:276] 0 containers: []
	W0722 00:55:17.660348   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:17.660355   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:17.660424   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:17.694512   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:17.694539   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.694546   72069 cri.go:89] found id: ""
	I0722 00:55:17.694554   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:17.694630   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.698953   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:17.702750   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:17.702774   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:17.758798   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:17.758828   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:17.805596   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:17.805628   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:17.819507   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:17.819534   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:17.943432   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:17.943462   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:17.980146   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:17.980184   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:18.023530   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:18.023560   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:18.060312   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:18.060349   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:18.097669   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:18.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:18.530884   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:18.530918   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:18.579946   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:18.579980   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:18.636228   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:18.636262   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:18.685202   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:18.685244   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.239747   72069 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I0722 00:55:21.244126   72069 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I0722 00:55:21.245031   72069 api_server.go:141] control plane version: v1.30.3
	I0722 00:55:21.245050   72069 api_server.go:131] duration metric: took 3.901224078s to wait for apiserver health ...
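The healthz check above is a plain HTTPS GET against `<node-ip>:8443/healthz` that is considered healthy once it returns 200 with the literal body `ok`. A minimal sketch of the same probe; skipping certificate verification is an assumption made here only because the cluster CA is not in the host's trust store:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver the way the log does: GET /healthz,
// expect HTTP 200 and the literal body "ok".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The serving cert is signed by the cluster CA, which this host does
		// not trust; skip verification for the liveness probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// IP and port taken from the log above; adjust for another cluster.
	if err := checkHealthz("https://192.168.72.32:8443/healthz"); err != nil {
		fmt.Println("not healthy:", err)
		return
	}
	fmt.Println("ok")
}
```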
	I0722 00:55:21.245057   72069 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:21.245076   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:55:21.245134   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:55:21.288786   72069 cri.go:89] found id: "62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.288814   72069 cri.go:89] found id: ""
	I0722 00:55:21.288824   72069 logs.go:276] 1 containers: [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e]
	I0722 00:55:21.288885   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.293145   72069 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:55:21.293202   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:55:21.332455   72069 cri.go:89] found id: "a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.332480   72069 cri.go:89] found id: ""
	I0722 00:55:21.332488   72069 logs.go:276] 1 containers: [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24]
	I0722 00:55:21.332548   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.336338   72069 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:55:21.336409   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:55:21.370820   72069 cri.go:89] found id: "93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:21.370842   72069 cri.go:89] found id: ""
	I0722 00:55:21.370851   72069 logs.go:276] 1 containers: [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc]
	I0722 00:55:21.370906   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.374995   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:55:21.375064   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:55:16.969963   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:19.469286   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:21.469397   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:20.687467   71396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:55:20.699834   71396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 00:55:20.718921   71396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:55:20.719067   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:20.719156   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-945581 minikube.k8s.io/updated_at=2024_07_22T00_55_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=no-preload-945581 minikube.k8s.io/primary=true
	I0722 00:55:20.946819   71396 ops.go:34] apiserver oom_adj: -16
	I0722 00:55:20.948116   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.448199   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.949130   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.448962   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:22.948929   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:21.409283   72069 cri.go:89] found id: "deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:21.409309   72069 cri.go:89] found id: ""
	I0722 00:55:21.409319   72069 logs.go:276] 1 containers: [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e]
	I0722 00:55:21.409380   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.413201   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:55:21.413257   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:55:21.447229   72069 cri.go:89] found id: "fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.447255   72069 cri.go:89] found id: ""
	I0722 00:55:21.447264   72069 logs.go:276] 1 containers: [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a]
	I0722 00:55:21.447326   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.451185   72069 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:55:21.451247   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:55:21.489294   72069 cri.go:89] found id: "193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.489320   72069 cri.go:89] found id: ""
	I0722 00:55:21.489330   72069 logs.go:276] 1 containers: [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a]
	I0722 00:55:21.489399   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.493428   72069 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:55:21.493487   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:55:21.530111   72069 cri.go:89] found id: ""
	I0722 00:55:21.530144   72069 logs.go:276] 0 containers: []
	W0722 00:55:21.530154   72069 logs.go:278] No container was found matching "kindnet"
	I0722 00:55:21.530162   72069 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0722 00:55:21.530224   72069 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0722 00:55:21.571293   72069 cri.go:89] found id: "d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:21.571315   72069 cri.go:89] found id: "8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.571322   72069 cri.go:89] found id: ""
	I0722 00:55:21.571330   72069 logs.go:276] 2 containers: [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397]
	I0722 00:55:21.571401   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.575584   72069 ssh_runner.go:195] Run: which crictl
	I0722 00:55:21.579520   72069 logs.go:123] Gathering logs for dmesg ...
	I0722 00:55:21.579541   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:55:21.592967   72069 logs.go:123] Gathering logs for kube-proxy [fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a] ...
	I0722 00:55:21.592997   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4ac4f1206a695924f6d9a3d33841435edb30c09a789ed1e48c2215f7684c9a"
	I0722 00:55:21.630169   72069 logs.go:123] Gathering logs for kube-controller-manager [193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a] ...
	I0722 00:55:21.630196   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 193fb390e4d473356d47e2b28ba4b892cd676b5d12016bae37548ca6a1b0c39a"
	I0722 00:55:21.681610   72069 logs.go:123] Gathering logs for storage-provisioner [8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397] ...
	I0722 00:55:21.681647   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8efc9587f83d69a6919ea5df3311fd64b8136888d311381f300e6904f68a4397"
	I0722 00:55:21.716935   72069 logs.go:123] Gathering logs for kubelet ...
	I0722 00:55:21.716964   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:55:21.776484   72069 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:55:21.776520   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 00:55:21.888514   72069 logs.go:123] Gathering logs for kube-apiserver [62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e] ...
	I0722 00:55:21.888549   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62e46b9a1718a375b218f2b3e03d631ca9e902d49b65841c526743f0b444fd5e"
	I0722 00:55:21.941849   72069 logs.go:123] Gathering logs for etcd [a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24] ...
	I0722 00:55:21.941881   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6a52deb009602ef809b517172ac82e63c13d85ed1018eec8e1f90ef9328be24"
	I0722 00:55:21.983259   72069 logs.go:123] Gathering logs for coredns [93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc] ...
	I0722 00:55:21.983292   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93b990e487bfd139634b1ef5c835d70024073b95592a4ceda23fc86ea7ca7bbc"
	I0722 00:55:22.017043   72069 logs.go:123] Gathering logs for kube-scheduler [deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e] ...
	I0722 00:55:22.017072   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deb1a27ba85472e6e278d31e11f6519456dcb1f4eab7f9082d3f6fc430dff43e"
	I0722 00:55:22.055690   72069 logs.go:123] Gathering logs for storage-provisioner [d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23] ...
	I0722 00:55:22.055716   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e399257c6a0891109c0b0ccb5b1cdec77faf26cee586f630a620b2ca6dff23"
	I0722 00:55:22.097686   72069 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:55:22.097714   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0722 00:55:22.469522   72069 logs.go:123] Gathering logs for container status ...
	I0722 00:55:22.469558   72069 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:55:25.028395   72069 system_pods.go:59] 8 kube-system pods found
	I0722 00:55:25.028427   72069 system_pods.go:61] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.028432   72069 system_pods.go:61] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.028436   72069 system_pods.go:61] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.028440   72069 system_pods.go:61] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.028443   72069 system_pods.go:61] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.028447   72069 system_pods.go:61] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.028454   72069 system_pods.go:61] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.028458   72069 system_pods.go:61] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.028464   72069 system_pods.go:74] duration metric: took 3.783402799s to wait for pod list to return data ...
	I0722 00:55:25.028472   72069 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:25.030505   72069 default_sa.go:45] found service account: "default"
	I0722 00:55:25.030533   72069 default_sa.go:55] duration metric: took 2.054427ms for default service account to be created ...
	I0722 00:55:25.030543   72069 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:25.035754   72069 system_pods.go:86] 8 kube-system pods found
	I0722 00:55:25.035783   72069 system_pods.go:89] "coredns-7db6d8ff4d-7mzsv" [48d43245-3f6c-4d8b-bffa-bc8298b65025] Running
	I0722 00:55:25.035791   72069 system_pods.go:89] "etcd-embed-certs-360389" [b7e50e68-ad82-4bea-889c-2cca33bec902] Running
	I0722 00:55:25.035797   72069 system_pods.go:89] "kube-apiserver-embed-certs-360389" [eb94246d-a1af-429b-9df1-ac87b6890b96] Running
	I0722 00:55:25.035801   72069 system_pods.go:89] "kube-controller-manager-embed-certs-360389" [430c71ef-d653-4151-abaa-688a34eff652] Running
	I0722 00:55:25.035806   72069 system_pods.go:89] "kube-proxy-8j7bx" [167c03f0-5b03-433a-951c-229baa23eb02] Running
	I0722 00:55:25.035812   72069 system_pods.go:89] "kube-scheduler-embed-certs-360389" [a2961b7d-e9e2-447a-812a-baf091c4a4e7] Running
	I0722 00:55:25.035823   72069 system_pods.go:89] "metrics-server-569cc877fc-k68zp" [9d851e83-b647-4e9e-a098-45c8b9d10323] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:25.035831   72069 system_pods.go:89] "storage-provisioner" [8c76b619-6b7f-45b0-93c2-df9879affe57] Running
	I0722 00:55:25.035840   72069 system_pods.go:126] duration metric: took 5.290732ms to wait for k8s-apps to be running ...
	I0722 00:55:25.035849   72069 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:25.035895   72069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:25.051215   72069 system_svc.go:56] duration metric: took 15.356281ms WaitForService to wait for kubelet
	I0722 00:55:25.051276   72069 kubeadm.go:582] duration metric: took 4m23.437832981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:25.051311   72069 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:25.054726   72069 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:25.054752   72069 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:25.054765   72069 node_conditions.go:105] duration metric: took 3.446848ms to run NodePressure ...
	I0722 00:55:25.054778   72069 start.go:241] waiting for startup goroutines ...
	I0722 00:55:25.054788   72069 start.go:246] waiting for cluster config update ...
	I0722 00:55:25.054801   72069 start.go:255] writing updated cluster config ...
	I0722 00:55:25.055086   72069 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:25.116027   72069 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:55:25.117549   72069 out.go:177] * Done! kubectl is now configured to use "embed-certs-360389" cluster and "default" namespace by default
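The system_pods wait that completes above lists every pod in the kube-system namespace and checks its phase; note the metrics-server pod stuck in Pending, consistent with the metrics-server failures listed at the top of this report. A client-go sketch of the same listing (assumes client-go is on the module path; the kubeconfig location is illustrative):

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; minikube manages its own kubeconfig on the node.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		// Mirrors the system_pods lines above: pod name plus current phase.
		marker := ""
		if pod.Status.Phase != v1.PodRunning {
			marker = "  <- not Running"
		}
		fmt.Printf("%q %s%s\n", pod.Name, pod.Status.Phase, marker)
	}
}
```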
	I0722 00:55:23.448829   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:23.949079   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.449145   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:24.949134   71396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:55:25.128492   71396 kubeadm.go:1113] duration metric: took 4.409469326s to wait for elevateKubeSystemPrivileges
	I0722 00:55:25.128522   71396 kubeadm.go:394] duration metric: took 5m2.117777857s to StartCluster
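The privilege-elevation step above creates a `minikube-rbac` clusterrolebinding granting cluster-admin to the kube-system default service account, then polls `kubectl get sa default` until that account exists. A sketch of the same loop via os/exec (binary and kubeconfig paths are copied from the log; treat them as placeholders):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

const (
	kubectl    = "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl" // from the log
	kubeconfig = "/var/lib/minikube/kubeconfig"
)

func main() {
	// Grant cluster-admin to the kube-system default service account.
	bind := exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default",
		"--kubeconfig="+kubeconfig)
	if out, err := bind.CombinedOutput(); err != nil {
		fmt.Printf("clusterrolebinding: %v: %s\n", err, out)
	}

	// Poll until the default service account exists, as the repeated
	// "get sa default" lines above do at roughly 500ms intervals.
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		check := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if check.Run() == nil {
			fmt.Println("default service account ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```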
	I0722 00:55:25.128542   71396 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.128617   71396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:55:25.131861   71396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:55:25.132125   71396 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:55:25.132199   71396 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:55:25.132379   71396 addons.go:69] Setting storage-provisioner=true in profile "no-preload-945581"
	I0722 00:55:25.132388   71396 addons.go:69] Setting default-storageclass=true in profile "no-preload-945581"
	I0722 00:55:25.132406   71396 addons.go:234] Setting addon storage-provisioner=true in "no-preload-945581"
	W0722 00:55:25.132414   71396 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:55:25.132420   71396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-945581"
	I0722 00:55:25.132448   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.132457   71396 addons.go:69] Setting metrics-server=true in profile "no-preload-945581"
	I0722 00:55:25.132479   71396 config.go:182] Loaded profile config "no-preload-945581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0722 00:55:25.132494   71396 addons.go:234] Setting addon metrics-server=true in "no-preload-945581"
	W0722 00:55:25.132505   71396 addons.go:243] addon metrics-server should already be in state true
	I0722 00:55:25.132821   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.133070   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133105   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133149   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133183   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133184   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.133472   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.133720   71396 out.go:177] * Verifying Kubernetes components...
	I0722 00:55:25.135029   71396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:55:25.152383   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0722 00:55:25.152445   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0722 00:55:25.152870   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.152872   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.153413   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153444   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.153469   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153470   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.153895   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.153905   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.154232   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.154464   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.154492   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.158204   71396 addons.go:234] Setting addon default-storageclass=true in "no-preload-945581"
	W0722 00:55:25.158225   71396 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:55:25.158253   71396 host.go:66] Checking if "no-preload-945581" exists ...
	I0722 00:55:25.158591   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.158760   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.166288   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0722 00:55:25.166696   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.167295   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.167306   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.170758   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.171324   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.171348   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.173560   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0722 00:55:25.173987   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.174523   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.174539   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.174860   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.175081   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.176781   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.178724   71396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:55:25.179884   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:55:25.179903   71396 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:55:25.179919   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.181493   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0722 00:55:25.182098   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.182718   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.182733   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.182860   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183198   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.183330   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.183342   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.183727   71396 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:55:25.183741   71396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:55:25.183891   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.184075   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.184230   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.184432   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.187822   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0722 00:55:25.188203   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.188726   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.188742   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.189119   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.189438   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.191017   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.192912   71396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:55:25.194050   71396 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.194071   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:55:25.194088   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.199881   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200317   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.200348   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.200562   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.200733   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.200893   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.201015   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
	I0722 00:55:25.202285   71396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0722 00:55:25.202834   71396 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:55:25.203361   71396 main.go:141] libmachine: Using API Version  1
	I0722 00:55:25.203384   71396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:55:25.204083   71396 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:55:25.204303   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetState
	I0722 00:55:25.206142   71396 main.go:141] libmachine: (no-preload-945581) Calling .DriverName
	I0722 00:55:25.206352   71396 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.206369   71396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:55:25.206387   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHHostname
	I0722 00:55:25.209377   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210705   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHPort
	I0722 00:55:25.210707   71396 main.go:141] libmachine: (no-preload-945581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:d4:7d", ip: ""} in network mk-no-preload-945581: {Iface:virbr2 ExpiryTime:2024-07-22 01:49:58 +0000 UTC Type:0 Mac:52:54:00:2e:d4:7d Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-945581 Clientid:01:52:54:00:2e:d4:7d}
	I0722 00:55:25.210740   71396 main.go:141] libmachine: (no-preload-945581) DBG | domain no-preload-945581 has defined IP address 192.168.50.251 and MAC address 52:54:00:2e:d4:7d in network mk-no-preload-945581
	I0722 00:55:25.210960   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHKeyPath
	I0722 00:55:25.211123   71396 main.go:141] libmachine: (no-preload-945581) Calling .GetSSHUsername
	I0722 00:55:25.211248   71396 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/no-preload-945581/id_rsa Username:docker}
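The `Launching plugin server for driver kvm2` / `Plugin server listening at address 127.0.0.1:<port>` / `Calling .GetVersion` lines show libmachine's driver-plugin model: each driver runs as a child process serving RPC on an ephemeral loopback port, and every method call crosses that boundary. A toy net/rpc sketch of the same shape; this is not libmachine's actual wire protocol, and the `Driver`/`GetVersion` names are illustrative:

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver stands in for a machine driver; the real interface is much larger
// (.GetState, .GetSSHHostname, .Close, ...).
type Driver struct{}

// Empty is a placeholder argument type for methods that take no input.
type Empty struct{}

func (d *Driver) GetVersion(_ Empty, reply *int) error {
	*reply = 1 // mirrors "Using API Version  1" in the log
	return nil
}

func main() {
	server := rpc.NewServer()
	if err := server.Register(&Driver{}); err != nil {
		panic(err)
	}
	// Listen on an ephemeral loopback port, as the plugin servers above do.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	fmt.Println("Plugin server listening at address", ln.Addr())
	go server.Accept(ln)

	// The host side dials the advertised port and invokes methods remotely.
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var version int
	if err := client.Call("Driver.GetVersion", Empty{}, &version); err != nil {
		panic(err)
	}
	fmt.Println("Using API Version", version)
}
```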
	I0722 00:55:25.333251   71396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:55:25.365998   71396 node_ready.go:35] waiting up to 6m0s for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378559   71396 node_ready.go:49] node "no-preload-945581" has status "Ready":"True"
	I0722 00:55:25.378584   71396 node_ready.go:38] duration metric: took 12.552825ms for node "no-preload-945581" to be "Ready" ...
	I0722 00:55:25.378599   71396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:25.384264   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:25.455470   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:55:25.455496   71396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:55:25.474831   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:55:25.503642   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:55:25.506218   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:55:25.506239   71396 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:55:25.539602   71396 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:25.539632   71396 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:55:25.614686   71396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:55:26.122237   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122271   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122313   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122343   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122695   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122700   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.122710   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122714   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.122721   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122747   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.122725   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.122806   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.124540   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125781   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.125845   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125869   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.125894   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.125956   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.161421   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.161449   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.161772   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.161789   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.307902   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.307928   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308198   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308226   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308241   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308258   71396 main.go:141] libmachine: Making call to close driver server
	I0722 00:55:26.308267   71396 main.go:141] libmachine: (no-preload-945581) Calling .Close
	I0722 00:55:26.308531   71396 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:55:26.308600   71396 main.go:141] libmachine: (no-preload-945581) DBG | Closing plugin on server side
	I0722 00:55:26.308624   71396 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:55:26.308642   71396 addons.go:475] Verifying addon metrics-server=true in "no-preload-945581"
	I0722 00:55:26.310330   71396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:55:23.968358   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.969024   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:25.631575   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:55:25.632092   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:25.632299   71766 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:26.311753   71396 addons.go:510] duration metric: took 1.179586106s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
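Enabling an addon, per the lines above, amounts to staging its manifests under /etc/kubernetes/addons/ on the node and applying them all in one invocation of the cluster's pinned kubectl. A sketch of that apply step (paths copied from the log; treat them as placeholders):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Manifests staged by the scp steps above.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{
		"env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// Equivalent to: sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```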
	I0722 00:55:27.390974   71396 pod_ready.go:102] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:28.468948   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.469200   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:30.632735   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:30.632946   71766 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:55:29.390868   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:29.390900   71396 pod_ready.go:81] duration metric: took 4.006606542s for pod "coredns-5cfdc65f69-68wll" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:29.390913   71396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.396999   71396 pod_ready.go:92] pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:30.397020   71396 pod_ready.go:81] duration metric: took 1.006099367s for pod "coredns-5cfdc65f69-9j27w" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:30.397029   71396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:32.403722   71396 pod_ready.go:102] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:33.905060   71396 pod_ready.go:92] pod "etcd-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.905082   71396 pod_ready.go:81] duration metric: took 3.508047576s for pod "etcd-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.905090   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909413   71396 pod_ready.go:92] pod "kube-apiserver-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.909435   71396 pod_ready.go:81] duration metric: took 4.338236ms for pod "kube-apiserver-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.909447   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913150   71396 pod_ready.go:92] pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.913169   71396 pod_ready.go:81] duration metric: took 3.713217ms for pod "kube-controller-manager-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.913179   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917276   71396 pod_ready.go:92] pod "kube-proxy-g56gz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.917292   71396 pod_ready.go:81] duration metric: took 4.107042ms for pod "kube-proxy-g56gz" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.917299   71396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922272   71396 pod_ready.go:92] pod "kube-scheduler-no-preload-945581" in "kube-system" namespace has status "Ready":"True"
	I0722 00:55:33.922293   71396 pod_ready.go:81] duration metric: took 4.987007ms for pod "kube-scheduler-no-preload-945581" in "kube-system" namespace to be "Ready" ...
	I0722 00:55:33.922305   71396 pod_ready.go:38] duration metric: took 8.543672194s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
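Each pod_ready wait above succeeds once the pod reports the `Ready` condition with status True. A client-go sketch of that predicate (same assumptions as the earlier client-go example; the pod name is the one from the log):

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// exactly what the pod_ready.go:92 lines above are checking.
func isPodReady(pod *v1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(
		context.Background(), "etcd-no-preload-945581", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, isPodReady(pod))
}
```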
	I0722 00:55:33.922323   71396 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:55:33.922382   71396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:55:33.940449   71396 api_server.go:72] duration metric: took 8.808293379s to wait for apiserver process to appear ...
	I0722 00:55:33.940474   71396 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:55:33.940493   71396 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I0722 00:55:33.945335   71396 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I0722 00:55:33.946528   71396 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 00:55:33.946550   71396 api_server.go:131] duration metric: took 6.069708ms to wait for apiserver health ...
	I0722 00:55:33.946560   71396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:55:34.104920   71396 system_pods.go:59] 9 kube-system pods found
	I0722 00:55:34.104946   71396 system_pods.go:61] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.104950   71396 system_pods.go:61] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.104953   71396 system_pods.go:61] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.104957   71396 system_pods.go:61] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.104961   71396 system_pods.go:61] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.104964   71396 system_pods.go:61] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.104967   71396 system_pods.go:61] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.104973   71396 system_pods.go:61] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.104976   71396 system_pods.go:61] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.104983   71396 system_pods.go:74] duration metric: took 158.41766ms to wait for pod list to return data ...
	I0722 00:55:34.104991   71396 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:55:34.300892   71396 default_sa.go:45] found service account: "default"
	I0722 00:55:34.300917   71396 default_sa.go:55] duration metric: took 195.920215ms for default service account to be created ...
	I0722 00:55:34.300927   71396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:55:34.503892   71396 system_pods.go:86] 9 kube-system pods found
	I0722 00:55:34.503920   71396 system_pods.go:89] "coredns-5cfdc65f69-68wll" [0d9fbbef-f095-45c2-ae45-2c4be3a22e0d] Running
	I0722 00:55:34.503925   71396 system_pods.go:89] "coredns-5cfdc65f69-9j27w" [6979f6f9-75ac-49d9-adaf-71524576aad3] Running
	I0722 00:55:34.503929   71396 system_pods.go:89] "etcd-no-preload-945581" [1238e8ee-e39b-42ba-9a6a-cd76a64b7004] Running
	I0722 00:55:34.503933   71396 system_pods.go:89] "kube-apiserver-no-preload-945581" [c2f6bbe1-f9c6-435c-b84e-53cfcbff16f2] Running
	I0722 00:55:34.503937   71396 system_pods.go:89] "kube-controller-manager-no-preload-945581" [1d0f0195-570f-4e3e-b6cb-1b8c92b7464d] Running
	I0722 00:55:34.503942   71396 system_pods.go:89] "kube-proxy-g56gz" [81c84dcd-74b2-44b3-b25e-4074cfe2881d] Running
	I0722 00:55:34.503945   71396 system_pods.go:89] "kube-scheduler-no-preload-945581" [66b1b6fc-3ef5-4129-a372-1e7cd933715f] Running
	I0722 00:55:34.503951   71396 system_pods.go:89] "metrics-server-78fcd8795b-l858z" [0f17da27-a5bf-46ea-bbb8-00ee2f308542] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:55:34.503956   71396 system_pods.go:89] "storage-provisioner" [0448fcfd-604d-47b4-822e-bc0d117d3b2e] Running
	I0722 00:55:34.503964   71396 system_pods.go:126] duration metric: took 203.031012ms to wait for k8s-apps to be running ...
	I0722 00:55:34.503970   71396 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:55:34.504012   71396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:55:34.522978   71396 system_svc.go:56] duration metric: took 18.998137ms WaitForService to wait for kubelet
	I0722 00:55:34.523011   71396 kubeadm.go:582] duration metric: took 9.390857298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:55:34.523036   71396 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:55:34.702300   71396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:55:34.702326   71396 node_conditions.go:123] node cpu capacity is 2
	I0722 00:55:34.702335   71396 node_conditions.go:105] duration metric: took 179.29455ms to run NodePressure ...
	I0722 00:55:34.702348   71396 start.go:241] waiting for startup goroutines ...
	I0722 00:55:34.702354   71396 start.go:246] waiting for cluster config update ...
	I0722 00:55:34.702364   71396 start.go:255] writing updated cluster config ...
	I0722 00:55:34.702635   71396 ssh_runner.go:195] Run: rm -f paused
	I0722 00:55:34.761047   71396 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0722 00:55:34.762828   71396 out.go:177] * Done! kubectl is now configured to use "no-preload-945581" cluster and "default" namespace by default
	I0722 00:55:32.469295   71227 pod_ready.go:102] pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace has status "Ready":"False"
	I0722 00:55:34.463165   71227 pod_ready.go:81] duration metric: took 4m0.000607912s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" ...
	E0722 00:55:34.463231   71227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-dm7k7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0722 00:55:34.463253   71227 pod_ready.go:38] duration metric: took 4m12.043131734s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:55:34.463279   71227 kubeadm.go:597] duration metric: took 4m20.994856278s to restartPrimaryControlPlane
	W0722 00:55:34.463346   71227 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 00:55:34.463377   71227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:55:40.633490   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:55:40.633742   71766 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:00.634701   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:00.634950   71766 kubeadm.go:310] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
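The stalled kubelet-check above is kubeadm repeatedly probing the kubelet's local healthz endpoint on port 10248; "connection refused" means no process is listening there yet. A minimal version of that probe loop (the 40s deadline matches the log's initial timeout; the 5s interval is an assumption):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(40 * time.Second) // initial timeout, per the log
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err != nil {
			// "connection refused" lands here until the kubelet is listening.
			fmt.Println("kubelet not responding yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("[kubelet-check] timed out waiting for a healthy kubelet")
}
```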
	I0722 00:56:05.655223   71227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.191822471s)
	I0722 00:56:05.655285   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:05.670795   71227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:56:05.680127   71227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:05.689056   71227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:05.689072   71227 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:05.689118   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0722 00:56:05.698947   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:05.699001   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:05.707735   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0722 00:56:05.716112   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:05.716175   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:05.724928   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.733413   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:05.733460   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:05.742066   71227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0722 00:56:05.750370   71227 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:05.750426   71227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
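The grep/rm sequence above is the stale-config check: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the following `kubeadm init` can regenerate it. A sketch of that decision (endpoint and file list copied from the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444" // from the log
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm
			// regenerates a fresh kubeconfig during init.
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			os.Remove(f)
			continue
		}
		fmt.Println("keeping", f)
	}
}
```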
	I0722 00:56:05.759124   71227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:05.814249   71227 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:56:05.814306   71227 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:56:05.955768   71227 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:56:05.955885   71227 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:56:05.956011   71227 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:56:06.170000   71227 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:56:06.171996   71227 out.go:204]   - Generating certificates and keys ...
	I0722 00:56:06.172080   71227 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:56:06.172135   71227 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:56:06.172236   71227 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:56:06.172311   71227 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:56:06.172402   71227 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:56:06.172483   71227 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:56:06.172584   71227 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:56:06.172658   71227 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:56:06.172723   71227 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:56:06.172809   71227 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:56:06.172872   71227 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:56:06.172956   71227 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:56:06.324515   71227 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:56:06.404599   71227 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:56:06.706302   71227 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:56:06.786527   71227 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:56:07.148089   71227 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:56:07.148775   71227 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:56:07.151309   71227 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:56:07.153033   71227 out.go:204]   - Booting up control plane ...
	I0722 00:56:07.153148   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:56:07.153273   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:56:07.153885   71227 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:56:07.172937   71227 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:56:07.173045   71227 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:56:07.173090   71227 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:56:07.300183   71227 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:56:07.300269   71227 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:56:08.302077   71227 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001937113s
	I0722 00:56:08.302203   71227 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:56:13.303387   71227 kubeadm.go:310] [api-check] The API server is healthy after 5.00113236s
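The api-check phase above simply polls the apiserver's /healthz endpoint until it answers. The same probe can be run by hand from inside the node; a minimal sketch, assuming the certificate directory from the [certs] lines earlier:

    curl --cacert /var/lib/minikube/certs/ca.crt \
      https://control-plane.minikube.internal:8444/healthz
    # prints "ok" once the server is healthy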
	I0722 00:56:13.325036   71227 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:56:13.337820   71227 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:56:13.365933   71227 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:56:13.366130   71227 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-214905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:56:13.376396   71227 kubeadm.go:310] [bootstrap-token] Using token: 81m7iu.wgaegfh046xcj0bw
	I0722 00:56:13.377874   71227 out.go:204]   - Configuring RBAC rules ...
	I0722 00:56:13.377997   71227 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:56:13.387194   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:56:13.395840   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:56:13.399711   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:56:13.403370   71227 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:56:13.406167   71227 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:56:13.711728   71227 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:56:14.147363   71227 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:56:14.711903   71227 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:56:14.714465   71227 kubeadm.go:310] 
	I0722 00:56:14.714562   71227 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:56:14.714592   71227 kubeadm.go:310] 
	I0722 00:56:14.714716   71227 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:56:14.714732   71227 kubeadm.go:310] 
	I0722 00:56:14.714766   71227 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:56:14.714846   71227 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:56:14.714927   71227 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:56:14.714937   71227 kubeadm.go:310] 
	I0722 00:56:14.715014   71227 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:56:14.715021   71227 kubeadm.go:310] 
	I0722 00:56:14.715089   71227 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:56:14.715099   71227 kubeadm.go:310] 
	I0722 00:56:14.715186   71227 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:56:14.715294   71227 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:56:14.715426   71227 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:56:14.715442   71227 kubeadm.go:310] 
	I0722 00:56:14.715557   71227 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:56:14.715652   71227 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:56:14.715668   71227 kubeadm.go:310] 
	I0722 00:56:14.715798   71227 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.715952   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d \
	I0722 00:56:14.715992   71227 kubeadm.go:310] 	--control-plane 
	I0722 00:56:14.716006   71227 kubeadm.go:310] 
	I0722 00:56:14.716112   71227 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:56:14.716121   71227 kubeadm.go:310] 
	I0722 00:56:14.716222   71227 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 81m7iu.wgaegfh046xcj0bw \
	I0722 00:56:14.716367   71227 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80ccbc94ba9580996c1705dfd917104619fc36ac6d9dfc514aa97fdc535f583d 
	I0722 00:56:14.717617   71227 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
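The --discovery-token-ca-cert-hash in the join commands is the SHA-256 of the cluster CA's public key, letting a joining node pin the control plane it talks to. If the printed value is lost it can be recomputed from the CA certificate; a sketch using this run's cert layout (the upstream kubeadm docs use /etc/kubernetes/pki/ca.crt instead):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # or mint a fresh token together with the full command:
    kubeadm token create --print-join-command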
	I0722 00:56:14.717638   71227 cni.go:84] Creating CNI manager for ""
	I0722 00:56:14.717648   71227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0722 00:56:14.720538   71227 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 00:56:14.721794   71227 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 00:56:14.733927   71227 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
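The 496-byte conflist scp'd above is minikube's bridge CNI configuration. The log does not show its contents, but a bridge-plus-portmap conflist of that shape looks roughly like the following (field values illustrative only):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF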
	I0722 00:56:14.751260   71227 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:56:14.751396   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:14.751398   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-214905 minikube.k8s.io/updated_at=2024_07_22T00_56_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=default-k8s-diff-port-214905 minikube.k8s.io/primary=true
	I0722 00:56:14.774754   71227 ops.go:34] apiserver oom_adj: -16
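Two things run side by side here: the minikube-rbac binding grants cluster-admin to the kube-system:default service account (several addons depend on it), and the oom_adj read confirms the apiserver sits at -16, i.e. strongly shielded from the kernel OOM killer. Both are easy to verify after the fact:

    kubectl get clusterrolebinding minikube-rbac -o wide
    cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16, as logged above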
	I0722 00:56:14.931469   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.432059   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:15.931975   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.431574   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:16.932087   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.431783   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:17.932494   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.431847   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:18.932421   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.432397   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:19.931476   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.431800   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:20.931560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.431560   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:21.932566   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.431589   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:22.931482   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.431819   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:23.931863   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.432254   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:24.931686   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.432331   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:25.931809   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.432468   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:26.932464   71227 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:56:27.017084   71227 kubeadm.go:1113] duration metric: took 12.265748571s to wait for elevateKubeSystemPrivileges
	I0722 00:56:27.017121   71227 kubeadm.go:394] duration metric: took 5m13.595334887s to StartCluster
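The half-second burst of `kubectl get sa default` calls above is a readiness poll: the cluster-admin binding only matters once the controller-manager has created the "default" service account, so minikube retries twice a second until it exists. An equivalent shell form, using the same binary and kubeconfig:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done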
	I0722 00:56:27.017145   71227 settings.go:142] acquiring lock: {Name:mkd46b4735c946c3edc55a0e3a1e0107c5935395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.017235   71227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:56:27.018856   71227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-5094/kubeconfig: {Name:mk62254b368242377a8402f66f87931bbe831a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:56:27.019244   71227 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0722 00:56:27.019279   71227 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:56:27.019356   71227 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019378   71227 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-214905"
	I0722 00:56:27.019267   71227 config.go:182] Loaded profile config "default-k8s-diff-port-214905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:56:27.019393   71227 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-214905"
	I0722 00:56:27.019409   71227 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.019421   71227 addons.go:243] addon metrics-server should already be in state true
	I0722 00:56:27.019428   71227 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-214905"
	W0722 00:56:27.019388   71227 addons.go:243] addon storage-provisioner should already be in state true
	I0722 00:56:27.019449   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019466   71227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-214905"
	I0722 00:56:27.019497   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.019782   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019807   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019859   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019869   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.019884   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.019921   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.021236   71227 out.go:177] * Verifying Kubernetes components...
	I0722 00:56:27.022409   71227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:56:27.036892   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0722 00:56:27.036891   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0722 00:56:27.037416   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.037646   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.038122   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038144   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038106   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.038189   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.038505   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038560   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.038800   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.039251   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.039285   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.039596   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0722 00:56:27.040051   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.040619   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.040642   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.042285   71227 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-214905"
	W0722 00:56:27.042303   71227 addons.go:243] addon default-storageclass should already be in state true
	I0722 00:56:27.042341   71227 host.go:66] Checking if "default-k8s-diff-port-214905" exists ...
	I0722 00:56:27.042715   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.042738   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.042920   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.043806   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.043846   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.057683   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0722 00:56:27.058186   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058287   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0722 00:56:27.058740   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.058830   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.058849   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059215   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.059236   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.059297   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.059526   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.059669   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.060609   71227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19312-5094/.minikube/bin/docker-machine-driver-kvm2
	I0722 00:56:27.060663   71227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:56:27.061286   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.064001   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0722 00:56:27.064199   71227 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0722 00:56:27.064351   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.064849   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.064865   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.065349   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.065471   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0722 00:56:27.065483   71227 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0722 00:56:27.065495   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.065601   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.067562   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.069082   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.069254   71227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:56:27.069792   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.069915   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.069921   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.070104   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.070248   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.070404   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.070465   71227 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.070481   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:56:27.070498   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.073628   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074065   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.074091   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.074177   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.074369   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.074518   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.074994   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.080508   71227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0722 00:56:27.080919   71227 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:56:27.081452   71227 main.go:141] libmachine: Using API Version  1
	I0722 00:56:27.081476   71227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:56:27.081842   71227 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:56:27.082039   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetState
	I0722 00:56:27.083774   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .DriverName
	I0722 00:56:27.084027   71227 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.084047   71227 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:56:27.084076   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHHostname
	I0722 00:56:27.087047   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087475   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:14:d0", ip: ""} in network mk-default-k8s-diff-port-214905: {Iface:virbr3 ExpiryTime:2024-07-22 01:50:57 +0000 UTC Type:0 Mac:52:54:00:8d:14:d0 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:default-k8s-diff-port-214905 Clientid:01:52:54:00:8d:14:d0}
	I0722 00:56:27.087497   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | domain default-k8s-diff-port-214905 has defined IP address 192.168.61.97 and MAC address 52:54:00:8d:14:d0 in network mk-default-k8s-diff-port-214905
	I0722 00:56:27.087632   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHPort
	I0722 00:56:27.087787   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHKeyPath
	I0722 00:56:27.087926   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .GetSSHUsername
	I0722 00:56:27.088038   71227 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/default-k8s-diff-port-214905/id_rsa Username:docker}
	I0722 00:56:27.208950   71227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:56:27.225704   71227 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234643   71227 node_ready.go:49] node "default-k8s-diff-port-214905" has status "Ready":"True"
	I0722 00:56:27.234674   71227 node_ready.go:38] duration metric: took 8.937409ms for node "default-k8s-diff-port-214905" to be "Ready" ...
	I0722 00:56:27.234686   71227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:27.240541   71227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247458   71227 pod_ready.go:92] pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.247479   71227 pod_ready.go:81] duration metric: took 6.913431ms for pod "etcd-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.247492   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251958   71227 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.251979   71227 pod_ready.go:81] duration metric: took 4.476995ms for pod "kube-apiserver-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.251991   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260632   71227 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:27.260652   71227 pod_ready.go:81] duration metric: took 8.652689ms for pod "kube-controller-manager-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.260663   71227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:27.311711   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:56:27.314904   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0722 00:56:27.314929   71227 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0722 00:56:27.317763   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:56:27.375759   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0722 00:56:27.375792   71227 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0722 00:56:27.441746   71227 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:27.441773   71227 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0722 00:56:27.525855   71227 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0722 00:56:28.142579   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142621   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.142644   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.142627   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.143020   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.143039   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.143052   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.143061   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.144811   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144843   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.144854   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144882   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144895   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.144867   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.144913   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.144903   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.145147   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.145161   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.145180   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.173321   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.173350   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.173640   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.173656   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.266726   71227 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace has status "Ready":"True"
	I0722 00:56:28.266754   71227 pod_ready.go:81] duration metric: took 1.006081833s for pod "kube-scheduler-default-k8s-diff-port-214905" in "kube-system" namespace to be "Ready" ...
	I0722 00:56:28.266764   71227 pod_ready.go:38] duration metric: took 1.032063964s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:56:28.266780   71227 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:56:28.266844   71227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:56:28.307127   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307156   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307461   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307534   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307540   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.307571   71227 main.go:141] libmachine: Making call to close driver server
	I0722 00:56:28.307585   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) Calling .Close
	I0722 00:56:28.307953   71227 main.go:141] libmachine: (default-k8s-diff-port-214905) DBG | Closing plugin on server side
	I0722 00:56:28.307976   71227 main.go:141] libmachine: Successfully made call to close driver server
	I0722 00:56:28.307996   71227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0722 00:56:28.308013   71227 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-214905"
	I0722 00:56:28.309683   71227 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0722 00:56:28.310765   71227 addons.go:510] duration metric: took 1.291480207s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
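Once enabled, metrics-server registers the v1beta1.metrics.k8s.io APIService, which is what the later MetricsServer test steps exercise. A quick manual check with plain kubectl:

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes    # works once the APIService reports Available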
	I0722 00:56:28.385242   71227 api_server.go:72] duration metric: took 1.365947411s to wait for apiserver process to appear ...
	I0722 00:56:28.385266   71227 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:56:28.385287   71227 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8444/healthz ...
	I0722 00:56:28.390459   71227 api_server.go:279] https://192.168.61.97:8444/healthz returned 200:
	ok
	I0722 00:56:28.391689   71227 api_server.go:141] control plane version: v1.30.3
	I0722 00:56:28.391708   71227 api_server.go:131] duration metric: took 6.436238ms to wait for apiserver health ...
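The apiserver also reports the control-plane version over its /version endpoint; under the default anonymous RBAC (system:public-info-viewer) it needs no client certificate, so a quick sketch is:

    curl -k https://192.168.61.97:8444/version
    # JSON blob whose gitVersion should read v1.30.3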
	I0722 00:56:28.391716   71227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:56:28.400135   71227 system_pods.go:59] 9 kube-system pods found
	I0722 00:56:28.400169   71227 system_pods.go:61] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400176   71227 system_pods.go:61] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.400184   71227 system_pods.go:61] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.400189   71227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.400193   71227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.400199   71227 system_pods.go:61] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.400203   71227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.400209   71227 system_pods.go:61] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.400213   71227 system_pods.go:61] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.400220   71227 system_pods.go:74] duration metric: took 8.49892ms to wait for pod list to return data ...
	I0722 00:56:28.400227   71227 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:56:28.430734   71227 default_sa.go:45] found service account: "default"
	I0722 00:56:28.430757   71227 default_sa.go:55] duration metric: took 30.524587ms for default service account to be created ...
	I0722 00:56:28.430767   71227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:56:28.632635   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.632671   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632682   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.632692   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.632701   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.632709   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.632721   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.632730   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.632742   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.632754   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.632776   71227 retry.go:31] will retry after 238.143812ms: missing components: kube-dns, kube-proxy
	I0722 00:56:28.882228   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:28.882257   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882264   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:28.882271   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:28.882276   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:28.882281   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:28.882289   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:28.882295   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:28.882307   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:28.882318   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:28.882334   71227 retry.go:31] will retry after 320.753602ms: missing components: kube-dns, kube-proxy
	I0722 00:56:29.215129   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.215163   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215187   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.215197   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.215209   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.215221   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.215232   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 00:56:29.215241   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.215255   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.215267   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 00:56:29.215285   71227 retry.go:31] will retry after 458.931739ms: missing components: kube-proxy
	I0722 00:56:29.683141   71227 system_pods.go:86] 9 kube-system pods found
	I0722 00:56:29.683180   71227 system_pods.go:89] "coredns-7db6d8ff4d-4gv5m" [6db8dadd-0345-4eef-a024-bdaf97146e30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683194   71227 system_pods.go:89] "coredns-7db6d8ff4d-phh59" [5f48ef56-5d78-4a1b-b53b-b99a03114323] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 00:56:29.683205   71227 system_pods.go:89] "etcd-default-k8s-diff-port-214905" [73b9e637-e243-4ccf-bead-f9097f289431] Running
	I0722 00:56:29.683213   71227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-214905" [2636ebd4-acb4-4a81-9a48-4c226b9629d9] Running
	I0722 00:56:29.683220   71227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-214905" [ec2aabd6-cd3a-46c6-834f-5c5ec32b85ba] Running
	I0722 00:56:29.683230   71227 system_pods.go:89] "kube-proxy-th55d" [f938f331-504a-40f0-8b44-4b23cd07a93e] Running
	I0722 00:56:29.683238   71227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-214905" [a5d8a2f6-0820-4a90-b3c6-3730f8e5f7ec] Running
	I0722 00:56:29.683250   71227 system_pods.go:89] "metrics-server-569cc877fc-d4z4t" [f1a411a0-2d46-4c04-9922-eb4046852082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0722 00:56:29.683255   71227 system_pods.go:89] "storage-provisioner" [ce8b4fe1-79af-497d-8119-7ad60547fefe] Running
	I0722 00:56:29.683262   71227 system_pods.go:126] duration metric: took 1.252489422s to wait for k8s-apps to be running ...
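The three passes above (with 238ms, 320ms, and 458ms backoffs) are a list-diff-retry loop: list the kube-system pods, check the required components, and retry until each is Running; metrics-server is allowed to remain Pending. The same wait can be expressed declaratively; a sketch for one of the gating components:

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-proxy --timeout=120s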
	I0722 00:56:29.683270   71227 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:56:29.683313   71227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:29.698422   71227 system_svc.go:56] duration metric: took 15.142969ms WaitForService to wait for kubelet
	I0722 00:56:29.698453   71227 kubeadm.go:582] duration metric: took 2.679163358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:56:29.698477   71227 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:56:29.701906   71227 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:56:29.701930   71227 node_conditions.go:123] node cpu capacity is 2
	I0722 00:56:29.701939   71227 node_conditions.go:105] duration metric: took 3.458023ms to run NodePressure ...
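The NodePressure check reads capacity straight off the Node object; the 17734596Ki and 2-CPU figures above come from fields you can query directly:

    kubectl get node default-k8s-diff-port-214905 -o jsonpath='{.status.capacity}'
    # e.g. {"cpu":"2","ephemeral-storage":"17734596Ki",...}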
	I0722 00:56:29.701950   71227 start.go:241] waiting for startup goroutines ...
	I0722 00:56:29.701958   71227 start.go:246] waiting for cluster config update ...
	I0722 00:56:29.701966   71227 start.go:255] writing updated cluster config ...
	I0722 00:56:29.702207   71227 ssh_runner.go:195] Run: rm -f paused
	I0722 00:56:29.763936   71227 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:56:29.765787   71227 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-214905" cluster and "default" namespace by default
	I0722 00:56:40.637375   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:56:40.637661   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:56:40.637719   71766 kubeadm.go:310] 
	I0722 00:56:40.637787   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:56:40.637855   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:56:40.637869   71766 kubeadm.go:310] 
	I0722 00:56:40.637946   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:56:40.638007   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:56:40.638123   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:56:40.638133   71766 kubeadm.go:310] 
	I0722 00:56:40.638239   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:56:40.638268   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:56:40.638297   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:56:40.638324   71766 kubeadm.go:310] 
	I0722 00:56:40.638483   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:56:40.638630   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0722 00:56:40.638644   71766 kubeadm.go:310] 
	I0722 00:56:40.638803   71766 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:56:40.638945   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:56:40.639065   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:56:40.639174   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:56:40.639186   71766 kubeadm.go:310] 
	I0722 00:56:40.639607   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:56:40.639734   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:56:40.639843   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0722 00:56:40.640012   71766 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	
	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
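Everything tagged 71766, starting with the kubelet-check failures above, is a second profile pinned to Kubernetes v1.20.0; its kubelet never comes up, so kubeadm times out in wait-control-plane. The triage steps the message suggests, in runnable form for this crio node:

    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then inspect the failing container:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID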
	
	I0722 00:56:40.640066   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0722 00:56:41.089622   71766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:56:41.103816   71766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:56:41.113816   71766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:56:41.113838   71766 kubeadm.go:157] found existing configuration files:
	
	I0722 00:56:41.113888   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:56:41.122963   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:56:41.123028   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:56:41.133449   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:56:41.143569   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:56:41.143642   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:56:41.152996   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.162591   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:56:41.162681   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:56:41.171972   71766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:56:41.181465   71766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:56:41.181534   71766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
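The four grep/rm pairs above implement one cleanup rule: any kubeconfig under /etc/kubernetes that does not mention the expected endpoint https://control-plane.minikube.internal:8443 is removed before the retry. The same logic condensed into a loop (a sketch, not the literal commands minikube runs):

	for f in admin kubelet controller-manager scheduler; do
	  # grep exits non-zero when the endpoint is absent or the file is missing,
	  # so the conf file is dropped in either case:
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done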
	I0722 00:56:41.190904   71766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:56:41.411029   71766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:58:37.359860   71766 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0722 00:58:37.360031   71766 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0722 00:58:37.361488   71766 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0722 00:58:37.361558   71766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:58:37.361653   71766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:58:37.361789   71766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:58:37.361922   71766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:58:37.362002   71766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:58:37.363826   71766 out.go:204]   - Generating certificates and keys ...
	I0722 00:58:37.363908   71766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:58:37.363981   71766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:58:37.364060   71766 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 00:58:37.364111   71766 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 00:58:37.364178   71766 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 00:58:37.364224   71766 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 00:58:37.364291   71766 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 00:58:37.364379   71766 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 00:58:37.364484   71766 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 00:58:37.364596   71766 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 00:58:37.364662   71766 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 00:58:37.364720   71766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:58:37.364763   71766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:58:37.364808   71766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:58:37.364892   71766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:58:37.364959   71766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:58:37.365054   71766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:58:37.365167   71766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:58:37.365222   71766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:58:37.365314   71766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:58:37.366522   71766 out.go:204]   - Booting up control plane ...
	I0722 00:58:37.366615   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:58:37.366695   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:58:37.366775   71766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:58:37.366903   71766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:58:37.367078   71766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 00:58:37.367156   71766 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0722 00:58:37.367262   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367502   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367580   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.367745   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.367819   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368017   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368078   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368233   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368299   71766 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0722 00:58:37.368461   71766 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0722 00:58:37.368471   71766 kubeadm.go:310] 
	I0722 00:58:37.368519   71766 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0722 00:58:37.368567   71766 kubeadm.go:310] 		timed out waiting for the condition
	I0722 00:58:37.368578   71766 kubeadm.go:310] 
	I0722 00:58:37.368630   71766 kubeadm.go:310] 	This error is likely caused by:
	I0722 00:58:37.368695   71766 kubeadm.go:310] 		- The kubelet is not running
	I0722 00:58:37.368821   71766 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0722 00:58:37.368831   71766 kubeadm.go:310] 
	I0722 00:58:37.368945   71766 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0722 00:58:37.368999   71766 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0722 00:58:37.369050   71766 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0722 00:58:37.369060   71766 kubeadm.go:310] 
	I0722 00:58:37.369160   71766 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0722 00:58:37.369278   71766 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0722 00:58:37.369286   71766 kubeadm.go:310] 
	I0722 00:58:37.369387   71766 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0722 00:58:37.369490   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0722 00:58:37.369557   71766 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0722 00:58:37.369624   71766 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0722 00:58:37.369652   71766 kubeadm.go:310] 
	I0722 00:58:37.369677   71766 kubeadm.go:394] duration metric: took 8m3.085886913s to StartCluster
	I0722 00:58:37.369710   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0722 00:58:37.369762   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0722 00:58:37.411357   71766 cri.go:89] found id: ""
	I0722 00:58:37.411387   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.411395   71766 logs.go:278] No container was found matching "kube-apiserver"
	I0722 00:58:37.411401   71766 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0722 00:58:37.411451   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0722 00:58:37.445336   71766 cri.go:89] found id: ""
	I0722 00:58:37.445360   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.445369   71766 logs.go:278] No container was found matching "etcd"
	I0722 00:58:37.445374   71766 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0722 00:58:37.445423   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0722 00:58:37.477061   71766 cri.go:89] found id: ""
	I0722 00:58:37.477084   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.477092   71766 logs.go:278] No container was found matching "coredns"
	I0722 00:58:37.477098   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0722 00:58:37.477157   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0722 00:58:37.508974   71766 cri.go:89] found id: ""
	I0722 00:58:37.509002   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.509011   71766 logs.go:278] No container was found matching "kube-scheduler"
	I0722 00:58:37.509019   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0722 00:58:37.509078   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0722 00:58:37.542377   71766 cri.go:89] found id: ""
	I0722 00:58:37.542409   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.542419   71766 logs.go:278] No container was found matching "kube-proxy"
	I0722 00:58:37.542425   71766 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0722 00:58:37.542486   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0722 00:58:37.577327   71766 cri.go:89] found id: ""
	I0722 00:58:37.577357   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.577369   71766 logs.go:278] No container was found matching "kube-controller-manager"
	I0722 00:58:37.577377   71766 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0722 00:58:37.577443   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0722 00:58:37.616541   71766 cri.go:89] found id: ""
	I0722 00:58:37.616567   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.616574   71766 logs.go:278] No container was found matching "kindnet"
	I0722 00:58:37.616579   71766 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0722 00:58:37.616643   71766 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0722 00:58:37.651156   71766 cri.go:89] found id: ""
	I0722 00:58:37.651182   71766 logs.go:276] 0 containers: []
	W0722 00:58:37.651192   71766 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0722 00:58:37.651202   71766 logs.go:123] Gathering logs for container status ...
	I0722 00:58:37.651217   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 00:58:37.696577   71766 logs.go:123] Gathering logs for kubelet ...
	I0722 00:58:37.696614   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 00:58:37.751093   71766 logs.go:123] Gathering logs for dmesg ...
	I0722 00:58:37.751128   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 00:58:37.764949   71766 logs.go:123] Gathering logs for describe nodes ...
	I0722 00:58:37.764975   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0722 00:58:37.852490   71766 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0722 00:58:37.852509   71766 logs.go:123] Gathering logs for CRI-O ...
	I0722 00:58:37.852521   71766 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
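The diagnostics pass above reduces to five commands, copied from the Run lines so they can be replayed on the node (the container-status step is condensed; minikube falls back to `docker ps -a` when crictl is missing):

	$ sudo crictl ps -a                                   # container status
	$ sudo journalctl -u kubelet -n 400                   # kubelet log
	$ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	$ sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig       # fails here: apiserver is down
	$ sudo journalctl -u crio -n 400                      # CRI-O log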
	W0722 00:58:37.956810   71766 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0722 00:58:37.956861   71766 out.go:239] * 
	W0722 00:58:37.956923   71766 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.956944   71766 out.go:239] * 
	W0722 00:58:37.957872   71766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
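The box above asks for a log bundle; with this run's binary and profile, the command it names would be (a sketch of the suggested invocation):

	$ out/minikube-linux-amd64 logs --file=logs.txt -p old-k8s-version-366657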
	I0722 00:58:37.961112   71766 out.go:177] 
	W0722 00:58:37.962353   71766 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0722 00:58:37.962402   71766 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0722 00:58:37.962422   71766 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0722 00:58:37.963746   71766 out.go:177] 
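Acting on the suggestion above would mean restarting the profile with the kubelet's cgroup driver pinned to systemd (a sketch of the suggested invocation; the log does not establish whether it resolves this particular failure):

	$ out/minikube-linux-amd64 start -p old-k8s-version-366657 \
	      --extra-config=kubelet.cgroup-driver=systemd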
	
	
	==> CRI-O <==
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.581861247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610616581832946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc9e4ac5-e7a7-4217-a62c-baaf32d6e1f9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.582593126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7d37d1c-c519-43f8-929b-b1996059f68c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.582644249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7d37d1c-c519-43f8-929b-b1996059f68c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.582678621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f7d37d1c-c519-43f8-929b-b1996059f68c name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.614002521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c78b4fa-ad0b-44c2-87ee-037ff27d123d name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.614100281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c78b4fa-ad0b-44c2-87ee-037ff27d123d name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.615354251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71e2b459-e4aa-4abc-a199-4da7cc1027af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.615894009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610616615858181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71e2b459-e4aa-4abc-a199-4da7cc1027af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.616588764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83c236d7-1aa1-4255-96bb-b4b8d7dcb1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.616656419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83c236d7-1aa1-4255-96bb-b4b8d7dcb1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.616693864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=83c236d7-1aa1-4255-96bb-b4b8d7dcb1e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.647208625Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=252a7339-c7c1-4bd2-85ef-440d55f15495 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.647300445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=252a7339-c7c1-4bd2-85ef-440d55f15495 name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.648443582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=857ae78a-56a7-4172-86a4-bb4ade2159d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.648817997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610616648797255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=857ae78a-56a7-4172-86a4-bb4ade2159d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.649284980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=611bf5aa-824d-44fe-b3ca-b356b448c6ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.649367514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=611bf5aa-824d-44fe-b3ca-b356b448c6ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.649439807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=611bf5aa-824d-44fe-b3ca-b356b448c6ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.682326994Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b27986a-5727-4e58-96be-432287129c9e name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.682449559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b27986a-5727-4e58-96be-432287129c9e name=/runtime.v1.RuntimeService/Version
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.683956447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddc3b976-0932-42a2-8f42-701a08e12f59 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.684341608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721610616684314752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddc3b976-0932-42a2-8f42-701a08e12f59 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.684937330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=075a243e-4885-4ed6-aaf3-e9bf1017d270 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.685009619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=075a243e-4885-4ed6-aaf3-e9bf1017d270 name=/runtime.v1.RuntimeService/ListContainers
	Jul 22 01:10:16 old-k8s-version-366657 crio[629]: time="2024-07-22 01:10:16.685051817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=075a243e-4885-4ed6-aaf3-e9bf1017d270 name=/runtime.v1.RuntimeService/ListContainers
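Every ListContainers response in this excerpt is empty: CRI-O is up and answering, but no Kubernetes container was ever created. The same check by hand, using the socket the advice above names:

	$ sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# prints only the header row, matching the empty "container status" section below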
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul22 00:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051104] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039554] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.496567] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.796830] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544248] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.276300] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
	[  +0.064156] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073267] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.169185] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.171264] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.282291] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +6.446308] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.069249] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.917900] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[ +11.851684] kauditd_printk_skb: 46 callbacks suppressed
	[Jul22 00:54] systemd-fstab-generator[5055]: Ignoring "noauto" option for root device
	[Jul22 00:56] systemd-fstab-generator[5340]: Ignoring "noauto" option for root device
	[  +0.066214] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:10:16 up 20 min,  0 users,  load average: 0.06, 0.05, 0.00
	Linux old-k8s-version-366657 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000dd2e10)
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]: goroutine 169 [select]:
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000be3ef0, 0x4f0ac20, 0xc000ce5720, 0x1, 0xc0001020c0)
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c961c0, 0xc0001020c0)
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ccc5f0, 0xc000ce30a0)
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 22 01:10:13 old-k8s-version-366657 kubelet[6849]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 22 01:10:13 old-k8s-version-366657 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 22 01:10:13 old-k8s-version-366657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 22 01:10:14 old-k8s-version-366657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 141.
	Jul 22 01:10:14 old-k8s-version-366657 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 22 01:10:14 old-k8s-version-366657 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 22 01:10:14 old-k8s-version-366657 kubelet[6858]: I0722 01:10:14.278745    6858 server.go:416] Version: v1.20.0
	Jul 22 01:10:14 old-k8s-version-366657 kubelet[6858]: I0722 01:10:14.279084    6858 server.go:837] Client rotation is on, will bootstrap in background
	Jul 22 01:10:14 old-k8s-version-366657 kubelet[6858]: I0722 01:10:14.282968    6858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 22 01:10:14 old-k8s-version-366657 kubelet[6858]: W0722 01:10:14.284796    6858 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 22 01:10:14 old-k8s-version-366657 kubelet[6858]: I0722 01:10:14.285673    6858 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
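The kubelet is crash-looping (systemd restart counter at 141) and warns "Cannot detect current cgroup on cgroup v2", which is what the K8S_KUBELET_NOT_RUNNING suggestion targets. A quick way to compare the two cgroup configurations on the node (a sketch; the file locations are the conventional ones and are assumptions for this image):

	# cgroup v2 hosts report cgroup2fs here:
	$ stat -fc %T /sys/fs/cgroup
	# Driver the kubelet was started with:
	$ sudo grep -i cgroup /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env
	# Driver CRI-O is configured with:
	$ sudo grep -ri cgroup_manager /etc/crio/ 2>/dev/null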
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 2 (225.76293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-366657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (153.37s)

                                                
                                    

Test pass (255/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 26.13
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 12.46
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 11.57
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.12
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 148.82
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 140.95
38 TestAddons/parallel/Registry 18.65
40 TestAddons/parallel/InspektorGadget 11.76
42 TestAddons/parallel/HelmTiller 13.43
44 TestAddons/parallel/CSI 105.2
45 TestAddons/parallel/Headlamp 13.94
46 TestAddons/parallel/CloudSpanner 5.52
47 TestAddons/parallel/LocalPath 54.91
48 TestAddons/parallel/NvidiaDevicePlugin 7.14
49 TestAddons/parallel/Yakd 5.01
53 TestAddons/serial/GCPAuth/Namespaces 0.12
55 TestCertOptions 71.84
56 TestCertExpiration 329.79
58 TestForceSystemdFlag 68.27
59 TestForceSystemdEnv 67.06
61 TestKVMDriverInstallOrUpdate 3.78
65 TestErrorSpam/setup 37.02
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.69
68 TestErrorSpam/pause 1.46
69 TestErrorSpam/unpause 1.49
70 TestErrorSpam/stop 5.08
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 55.18
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.06
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
82 TestFunctional/serial/CacheCmd/cache/add_local 2.01
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 59.06
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.29
93 TestFunctional/serial/LogsFileCmd 1.27
94 TestFunctional/serial/InvalidService 4.46
96 TestFunctional/parallel/ConfigCmd 0.33
97 TestFunctional/parallel/DashboardCmd 29.97
98 TestFunctional/parallel/DryRun 0.37
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.72
104 TestFunctional/parallel/ServiceCmdConnect 10.57
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 43.89
108 TestFunctional/parallel/SSHCmd 0.43
109 TestFunctional/parallel/CpCmd 1.2
110 TestFunctional/parallel/MySQL 23.9
111 TestFunctional/parallel/FileSync 0.25
112 TestFunctional/parallel/CertSync 1.46
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
120 TestFunctional/parallel/License 0.54
121 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
132 TestFunctional/parallel/ProfileCmd/profile_list 0.26
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
134 TestFunctional/parallel/MountCmd/any-port 8.31
135 TestFunctional/parallel/MountCmd/specific-port 1.58
136 TestFunctional/parallel/ServiceCmd/List 0.32
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
139 TestFunctional/parallel/MountCmd/VerifyCleanup 0.9
140 TestFunctional/parallel/ServiceCmd/Format 0.36
141 TestFunctional/parallel/ServiceCmd/URL 0.4
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
146 TestFunctional/parallel/ImageCommands/ImageBuild 6.76
147 TestFunctional/parallel/ImageCommands/Setup 1.74
148 TestFunctional/parallel/Version/short 0.05
149 TestFunctional/parallel/Version/components 0.47
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.7
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.54
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.79
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.52
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 194.12
167 TestMultiControlPlane/serial/DeployApp 5.93
168 TestMultiControlPlane/serial/PingHostFromPods 1.15
169 TestMultiControlPlane/serial/AddWorkerNode 56.84
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
172 TestMultiControlPlane/serial/CopyFile 12.2
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.22
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 348.55
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.35
183 TestMultiControlPlane/serial/AddSecondaryNode 75.57
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
188 TestJSONOutput/start/Command 91.7
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.66
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.57
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 6.57
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 88.35
220 TestMountStart/serial/StartWithMountFirst 23.81
221 TestMountStart/serial/VerifyMountFirst 0.35
222 TestMountStart/serial/StartWithMountSecond 26.56
223 TestMountStart/serial/VerifyMountSecond 0.35
224 TestMountStart/serial/DeleteFirst 0.66
225 TestMountStart/serial/VerifyMountPostDelete 0.36
226 TestMountStart/serial/Stop 1.26
227 TestMountStart/serial/RestartStopped 23.75
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 114.39
232 TestMultiNode/serial/DeployApp2Nodes 5.18
233 TestMultiNode/serial/PingHostFrom2Pods 0.73
234 TestMultiNode/serial/AddNode 47.96
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 6.86
238 TestMultiNode/serial/StopNode 2.18
239 TestMultiNode/serial/StartAfterStop 37.87
241 TestMultiNode/serial/DeleteNode 2.34
243 TestMultiNode/serial/RestartMultiNode 174.57
244 TestMultiNode/serial/ValidateNameConflict 41.85
251 TestScheduledStopUnix 109.71
255 TestRunningBinaryUpgrade 178.37
266 TestNetworkPlugins/group/false 2.79
277 TestStoppedBinaryUpgrade/Setup 2.3
278 TestStoppedBinaryUpgrade/Upgrade 110.55
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
282 TestNoKubernetes/serial/StartWithK8s 41.55
283 TestNoKubernetes/serial/StartWithStopK8s 13.15
284 TestNoKubernetes/serial/Start 28.56
286 TestPause/serial/Start 73.94
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
288 TestNoKubernetes/serial/ProfileList 0.69
289 TestNoKubernetes/serial/Stop 1.29
290 TestNoKubernetes/serial/StartNoArgs 68.21
291 TestNetworkPlugins/group/auto/Start 82.94
292 TestPause/serial/SecondStartNoReconfiguration 61.47
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
294 TestNetworkPlugins/group/kindnet/Start 106.89
295 TestNetworkPlugins/group/calico/Start 146.03
296 TestPause/serial/Pause 0.73
297 TestPause/serial/VerifyStatus 0.25
298 TestPause/serial/Unpause 0.7
299 TestPause/serial/PauseAgain 0.94
300 TestNetworkPlugins/group/auto/KubeletFlags 0.25
301 TestNetworkPlugins/group/auto/NetCatPod 11.3
302 TestPause/serial/DeletePaused 1.14
303 TestPause/serial/VerifyDeletedResources 0.62
304 TestNetworkPlugins/group/custom-flannel/Start 102.78
305 TestNetworkPlugins/group/auto/DNS 0.18
306 TestNetworkPlugins/group/auto/Localhost 0.14
307 TestNetworkPlugins/group/auto/HairPin 0.13
308 TestNetworkPlugins/group/enable-default-cni/Start 85.79
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
311 TestNetworkPlugins/group/kindnet/NetCatPod 11.7
312 TestNetworkPlugins/group/kindnet/DNS 0.18
313 TestNetworkPlugins/group/kindnet/Localhost 0.16
314 TestNetworkPlugins/group/kindnet/HairPin 0.16
315 TestNetworkPlugins/group/flannel/Start 82.37
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.24
318 TestNetworkPlugins/group/calico/NetCatPod 12.25
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
321 TestNetworkPlugins/group/calico/DNS 0.16
322 TestNetworkPlugins/group/calico/Localhost 0.14
323 TestNetworkPlugins/group/calico/HairPin 0.13
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.23
326 TestNetworkPlugins/group/custom-flannel/DNS 0.23
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
330 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
331 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
332 TestNetworkPlugins/group/bridge/Start 61.32
336 TestStartStop/group/no-preload/serial/FirstStart 140.81
337 TestNetworkPlugins/group/flannel/ControllerPod 6.01
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
339 TestNetworkPlugins/group/flannel/NetCatPod 11.25
340 TestNetworkPlugins/group/flannel/DNS 0.21
341 TestNetworkPlugins/group/flannel/Localhost 0.18
342 TestNetworkPlugins/group/flannel/HairPin 0.22
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
344 TestNetworkPlugins/group/bridge/NetCatPod 9.22
345 TestNetworkPlugins/group/bridge/DNS 0.15
346 TestNetworkPlugins/group/bridge/Localhost 0.14
347 TestNetworkPlugins/group/bridge/HairPin 0.13
349 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.16
351 TestStartStop/group/newest-cni/serial/FirstStart 64.66
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
355 TestStartStop/group/newest-cni/serial/DeployApp 0
356 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
357 TestStartStop/group/newest-cni/serial/Stop 10.31
358 TestStartStop/group/no-preload/serial/DeployApp 11.28
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
360 TestStartStop/group/newest-cni/serial/SecondStart 34.3
361 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
363 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
365 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
366 TestStartStop/group/newest-cni/serial/Pause 4.08
368 TestStartStop/group/embed-certs/serial/FirstStart 57.26
369 TestStartStop/group/embed-certs/serial/DeployApp 9.26
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 678.47
377 TestStartStop/group/no-preload/serial/SecondStart 606.85
378 TestStartStop/group/old-k8s-version/serial/Stop 3.28
379 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
382 TestStartStop/group/embed-certs/serial/SecondStart 494.06

TestDownloadOnly/v1.20.0/json-events (26.13s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-825436 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-825436 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.131431356s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (26.13s)
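
As an aside on what this step exercises: "minikube start --download-only" fetches the ISO and the preloaded-image tarball for the requested Kubernetes version without ever creating a VM (the log below confirms the host is never started). A minimal Go sketch, not the suite's actual helper, of driving such a run the way the harness does; the profile name "download-only-demo" is a hypothetical stand-in:

// Sketch: run a download-only start and fail on a non-zero exit,
// mirroring the (dbg) Run step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-o=json", "--download-only", "-p", "download-only-demo",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0",
		"--container-runtime=crio", "--driver=kvm2")
	cmd.Stdout = os.Stdout // stream the JSON events as they arrive
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "download-only run failed: %v\n", err)
		os.Exit(1)
	}
}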

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
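
The preload-exists assertion that follows a download is essentially a stat of the tarball the previous step cached. A hedged sketch, assuming the cache layout visible in the logs (MINIKUBE_HOME/cache/preloaded-tarball/...):

// Sketch: confirm the v1.20.0 crio preload tarball is present in the
// profile cache; the MINIKUBE_HOME fallback is an assumption.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home := os.Getenv("MINIKUBE_HOME")
	if home == "" {
		home = filepath.Join(os.Getenv("HOME"), ".minikube")
	}
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintf(os.Stderr, "preload missing: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload exists:", tarball)
}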

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-825436
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-825436: exit status 85 (55.228981ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-825436 | jenkins | v1.33.1 | 21 Jul 24 23:24 UTC |          |
	|         | -p download-only-825436        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:24:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:24:41.359021   12275 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:24:41.359308   12275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:24:41.359318   12275 out.go:304] Setting ErrFile to fd 2...
	I0721 23:24:41.359323   12275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:24:41.359506   12275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	W0721 23:24:41.359623   12275 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19312-5094/.minikube/config/config.json: open /home/jenkins/minikube-integration/19312-5094/.minikube/config/config.json: no such file or directory
	I0721 23:24:41.360160   12275 out.go:298] Setting JSON to true
	I0721 23:24:41.360985   12275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":425,"bootTime":1721603856,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:24:41.361040   12275 start.go:139] virtualization: kvm guest
	I0721 23:24:41.363433   12275 out.go:97] [download-only-825436] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0721 23:24:41.363533   12275 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball: no such file or directory
	I0721 23:24:41.363571   12275 notify.go:220] Checking for updates...
	I0721 23:24:41.364810   12275 out.go:169] MINIKUBE_LOCATION=19312
	I0721 23:24:41.366044   12275 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:24:41.367300   12275 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:24:41.368437   12275 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:24:41.369679   12275 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0721 23:24:41.371707   12275 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 23:24:41.371922   12275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:24:41.471864   12275 out.go:97] Using the kvm2 driver based on user configuration
	I0721 23:24:41.471894   12275 start.go:297] selected driver: kvm2
	I0721 23:24:41.471903   12275 start.go:901] validating driver "kvm2" against <nil>
	I0721 23:24:41.472219   12275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:24:41.472349   12275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:24:41.486680   12275 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:24:41.486728   12275 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:24:41.487220   12275 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0721 23:24:41.487394   12275 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 23:24:41.487419   12275 cni.go:84] Creating CNI manager for ""
	I0721 23:24:41.487430   12275 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:24:41.487440   12275 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 23:24:41.487537   12275 start.go:340] cluster config:
	{Name:download-only-825436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-825436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:24:41.487735   12275 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:24:41.489593   12275 out.go:97] Downloading VM boot image ...
	I0721 23:24:41.489640   12275 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:24:54.200113   12275 out.go:97] Starting "download-only-825436" primary control-plane node in "download-only-825436" cluster
	I0721 23:24:54.200131   12275 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0721 23:24:54.299144   12275 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0721 23:24:54.299188   12275 cache.go:56] Caching tarball of preloaded images
	I0721 23:24:54.299349   12275 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0721 23:24:54.301400   12275 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0721 23:24:54.301430   12275 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0721 23:24:54.405153   12275 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0721 23:25:05.854082   12275 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0721 23:25:05.854168   12275 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-825436 host does not exist
	  To start a cluster, run: "minikube start -p download-only-825436"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
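
Note the inverted expectation here: because a download-only profile never creates a host, "minikube logs -p" is supposed to fail, and the test passes precisely because the command exited with status 85. A sketch of that assertion, with the binary path and profile name copied from the log:

// Sketch: assert that "minikube logs" against a download-only profile
// fails with the documented exit status 85.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-amd64",
		"logs", "-p", "download-only-825436").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got the expected exit status 85")
		return
	}
	fmt.Fprintf(os.Stderr, "unexpected result: %v\n", err)
	os.Exit(1)
}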

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-825436
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.30.3/json-events (12.46s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-576339 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-576339 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.46041853s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.46s)
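
The download URLs in these runs carry an md5 digest as a query parameter (for the v1.30.3 preload, checksum=md5:15191286f02471d9b3ea0b587fcafc39 per the log below), and the "getting/saving/verifying checksum" lines show the tarball being validated around the download. A sketch of that verification, with the local file path as an assumed stand-in:

// Sketch: recompute the preload tarball's md5 and compare it to the
// digest embedded in the download URL.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	const want = "15191286f02471d9b3ea0b587fcafc39" // from the URL's checksum= parameter
	f, err := os.Open("preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("preload checksum OK")
}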

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-576339
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-576339: exit status 85 (55.397981ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-825436 | jenkins | v1.33.1 | 21 Jul 24 23:24 UTC |                     |
	|         | -p download-only-825436        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-825436        | download-only-825436 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| start   | -o=json --download-only        | download-only-576339 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | -p download-only-576339        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:25:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:25:07.791315   12534 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:25:07.791403   12534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:07.791411   12534 out.go:304] Setting ErrFile to fd 2...
	I0721 23:25:07.791415   12534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:07.791593   12534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:25:07.792102   12534 out.go:298] Setting JSON to true
	I0721 23:25:07.792918   12534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":452,"bootTime":1721603856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:25:07.792969   12534 start.go:139] virtualization: kvm guest
	I0721 23:25:07.795056   12534 out.go:97] [download-only-576339] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:25:07.795201   12534 notify.go:220] Checking for updates...
	I0721 23:25:07.796736   12534 out.go:169] MINIKUBE_LOCATION=19312
	I0721 23:25:07.798635   12534 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:25:07.799967   12534 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:25:07.801061   12534 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:25:07.802175   12534 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0721 23:25:07.804103   12534 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 23:25:07.804298   12534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:25:07.834977   12534 out.go:97] Using the kvm2 driver based on user configuration
	I0721 23:25:07.834996   12534 start.go:297] selected driver: kvm2
	I0721 23:25:07.835004   12534 start.go:901] validating driver "kvm2" against <nil>
	I0721 23:25:07.835311   12534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:07.835385   12534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:25:07.849789   12534 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:25:07.849828   12534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:25:07.850288   12534 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0721 23:25:07.850423   12534 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 23:25:07.850444   12534 cni.go:84] Creating CNI manager for ""
	I0721 23:25:07.850451   12534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:25:07.850462   12534 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 23:25:07.850509   12534 start.go:340] cluster config:
	{Name:download-only-576339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-576339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:25:07.850587   12534 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:07.852293   12534 out.go:97] Starting "download-only-576339" primary control-plane node in "download-only-576339" cluster
	I0721 23:25:07.852308   12534 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:25:08.371587   12534 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0721 23:25:08.371618   12534 cache.go:56] Caching tarball of preloaded images
	I0721 23:25:08.371772   12534 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0721 23:25:08.373662   12534 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0721 23:25:08.373678   12534 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0721 23:25:08.476063   12534 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-576339 host does not exist
	  To start a cluster, run: "minikube start -p download-only-576339"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-576339
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0-beta.0/json-events (11.57s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-870595 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-870595 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.564979469s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (11.57s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-870595
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-870595: exit status 85 (54.927642ms)
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-825436 | jenkins | v1.33.1 | 21 Jul 24 23:24 UTC |                     |
	|         | -p download-only-825436             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-825436             | download-only-825436 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| start   | -o=json --download-only             | download-only-576339 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | -p download-only-576339             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-576339             | download-only-576339 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| start   | -o=json --download-only             | download-only-870595 | jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | -p download-only-870595             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:25:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:25:20.555920   12758 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:25:20.556519   12758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:20.556542   12758 out.go:304] Setting ErrFile to fd 2...
	I0721 23:25:20.556551   12758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:20.556971   12758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:25:20.557925   12758 out.go:298] Setting JSON to true
	I0721 23:25:20.558822   12758 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":465,"bootTime":1721603856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:25:20.558886   12758 start.go:139] virtualization: kvm guest
	I0721 23:25:20.560872   12758 out.go:97] [download-only-870595] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:25:20.561021   12758 notify.go:220] Checking for updates...
	I0721 23:25:20.562332   12758 out.go:169] MINIKUBE_LOCATION=19312
	I0721 23:25:20.563694   12758 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:25:20.565192   12758 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:25:20.566665   12758 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:25:20.567909   12758 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0721 23:25:20.570512   12758 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 23:25:20.570749   12758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:25:20.602688   12758 out.go:97] Using the kvm2 driver based on user configuration
	I0721 23:25:20.602710   12758 start.go:297] selected driver: kvm2
	I0721 23:25:20.602721   12758 start.go:901] validating driver "kvm2" against <nil>
	I0721 23:25:20.603083   12758 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:20.603166   12758 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-5094/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0721 23:25:20.617950   12758 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0721 23:25:20.618000   12758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:25:20.618652   12758 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0721 23:25:20.618851   12758 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 23:25:20.618924   12758 cni.go:84] Creating CNI manager for ""
	I0721 23:25:20.618939   12758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0721 23:25:20.618953   12758 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 23:25:20.619024   12758 start.go:340] cluster config:
	{Name:download-only-870595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-870595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:25:20.619151   12758 iso.go:125] acquiring lock: {Name:mk1c358d2514c457d22859dd20040df877cb9d42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:20.620876   12758 out.go:97] Starting "download-only-870595" primary control-plane node in "download-only-870595" cluster
	I0721 23:25:20.620891   12758 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0721 23:25:21.139822   12758 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0721 23:25:21.139850   12758 cache.go:56] Caching tarball of preloaded images
	I0721 23:25:21.139993   12758 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0721 23:25:21.142021   12758 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0721 23:25:21.142039   12758 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0721 23:25:21.245314   12758 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19312-5094/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-870595 host does not exist
	  To start a cluster, run: "minikube start -p download-only-870595"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.12s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-870595
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-302887 --alsologtostderr --binary-mirror http://127.0.0.1:36193 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-302887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-302887
--- PASS: TestBinaryMirror (0.55s)
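
TestBinaryMirror verifies that downloads can be redirected to a user-supplied endpoint via --binary-mirror (here http://127.0.0.1:36193). A hedged sketch of the simplest mirror that could satisfy such a flag: a loopback HTTP server fronting a directory of pre-fetched binaries (the "./mirror" directory is an assumption):

// Sketch: serve a directory of cached binaries over loopback so that
// --binary-mirror can point at it.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:36193", nil))
}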

TestOffline (148.82s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-897769 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-897769 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m27.841191337s)
helpers_test.go:175: Cleaning up "offline-crio-897769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-897769
--- PASS: TestOffline (148.82s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-688294
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-688294: exit status 85 (51.51554ms)
-- stdout --
	* Profile "addons-688294" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-688294"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-688294
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-688294: exit status 85 (50.736789ms)
-- stdout --
	* Profile "addons-688294" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-688294"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (140.95s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-688294 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-688294 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m20.950006707s)
--- PASS: TestAddons/Setup (140.95s)

TestAddons/parallel/Registry (18.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 22.54789ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-f6bxb" [8ed372bf-f96f-42fa-a8f1-eddc6650451c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007424982s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2gnkd" [a7a0e03d-5c29-4e30-9118-ff8299b7ca06] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004592493s
addons_test.go:342: (dbg) Run:  kubectl --context addons-688294 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-688294 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-688294 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.832254337s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.65s)
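
The reachability step above probes the registry addon from inside the cluster with "wget --spider -S" against its service DNS name. An equivalent hedged sketch in Go, runnable from any pod that can resolve cluster DNS; the URL is taken from the command above, and accepting any HTTP answer mirrors the spider check:

// Sketch: HEAD-probe the registry service and accept any HTTP response
// as evidence the addon is serving.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Fprintf(os.Stderr, "registry unreachable: %v\n", err)
		os.Exit(1)
	}
	resp.Body.Close()
	fmt.Println("registry answered:", resp.Status)
}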

TestAddons/parallel/InspektorGadget (11.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2gvpj" [4c6d3e92-9ecd-4931-9106-b6cd6514d6c8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004622947s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-688294
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-688294: (5.755968178s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

TestAddons/parallel/HelmTiller (13.43s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 23.360721ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-7tqs9" [c6255c6f-8301-451a-905c-7aabaac5493c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004858391s
addons_test.go:475: (dbg) Run:  kubectl --context addons-688294 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-688294 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.818078046s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.43s)

TestAddons/parallel/CSI (105.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.043258ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-688294 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc hpvc -o jsonpath={.status.phase} -n default
[the identical poll above repeats 59 times in the raw log while the test waits for the claim]
addons_test.go:576: (dbg) Run:  kubectl --context addons-688294 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [95ecfb4f-6927-4b32-9253-c29e65ba068a] Pending
helpers_test.go:344: "task-pv-pod" [95ecfb4f-6927-4b32-9253-c29e65ba068a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [95ecfb4f-6927-4b32-9253-c29e65ba068a] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004939881s
addons_test.go:586: (dbg) Run:  kubectl --context addons-688294 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-688294 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-688294 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-688294 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-688294 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-688294 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
[the identical poll above repeats 16 times in the raw log while the test waits for the claim]
addons_test.go:618: (dbg) Run:  kubectl --context addons-688294 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1cb0c5da-aaf8-4f50-9dcb-ccb0cb282c77] Pending
helpers_test.go:344: "task-pv-pod-restore" [1cb0c5da-aaf8-4f50-9dcb-ccb0cb282c77] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1cb0c5da-aaf8-4f50-9dcb-ccb0cb282c77] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003570554s
addons_test.go:628: (dbg) Run:  kubectl --context addons-688294 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-688294 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-688294 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-688294 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.718974381s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (105.20s)
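
The long runs of identical helpers_test.go:394 lines above are single poll loops. For reference, a minimal Go sketch of that pattern, assuming the helper simply re-runs the jsonpath query until the claim reports the wanted phase; the context, claim names, and flags are copied from the log, while the function name waitForPVCPhase and the 2s interval are assumptions, so the harness's real helper may differ:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase re-runs the same jsonpath query shown in the log until
	// the claim reaches the wanted phase or the deadline passes.
	func waitForPVCPhase(kubeContext, name, namespace, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second) // assumed interval; the real helper's cadence may differ
		}
		return fmt.Errorf("pvc %s/%s did not reach %q within %v", namespace, name, want, timeout)
	}

	func main() {
		if err := waitForPVCPhase("addons-688294", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}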

TestAddons/parallel/Headlamp (13.94s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-688294 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-2gjtz" [38892129-f578-47c0-8299-1968efa46c65] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-2gjtz" [38892129-f578-47c0-8299-1968efa46c65] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-2gjtz" [38892129-f578-47c0-8299-1968efa46c65] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003679966s
--- PASS: TestAddons/parallel/Headlamp (13.94s)

TestAddons/parallel/CloudSpanner (5.52s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-82gjz" [09dad483-0317-407b-88d7-2d5669426eee] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003477985s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-688294
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (54.91s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-688294 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-688294 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-688294 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a5b242d5-9601-4729-bd0e-55310dc94ccf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a5b242d5-9601-4729-bd0e-55310dc94ccf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a5b242d5-9601-4729-bd0e-55310dc94ccf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003557456s
addons_test.go:992: (dbg) Run:  kubectl --context addons-688294 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 ssh "cat /opt/local-path-provisioner/pvc-46a377b6-b11e-4fc9-9633-78f2e49f996d_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-688294 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-688294 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-688294 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-688294 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.142429156s)
--- PASS: TestAddons/parallel/LocalPath (54.91s)
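
The ssh cat above relies on local-path-provisioner's default on-disk layout, <volumeName>_<namespace>_<claimName> under /opt/local-path-provisioner, which matches the pvc-46a377b6..._default_test-pvc path in the log. A hedged Go sketch of deriving that path from the claim's JSON, as the test's get pvc -o=json step suggests; the layout assumption and the JSON field selection are mine, not the harness's code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"path"
	)

	func main() {
		// Read the bound claim and pull out .spec.volumeName, which names the
		// provisioned directory (e.g. pvc-46a377b6-...).
		out, err := exec.Command("kubectl", "--context", "addons-688294",
			"get", "pvc", "test-pvc", "-n", "default", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pvc struct {
			Spec struct {
				VolumeName string `json:"volumeName"`
			} `json:"spec"`
		}
		if err := json.Unmarshal(out, &pvc); err != nil {
			panic(err)
		}
		// Assumed default local-path layout: <volumeName>_<namespace>_<claimName>.
		dir := path.Join("/opt/local-path-provisioner",
			fmt.Sprintf("%s_%s_%s", pvc.Spec.VolumeName, "default", "test-pvc"))
		fmt.Println(path.Join(dir, "file1")) // the file the test cats over ssh
	}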

TestAddons/parallel/NvidiaDevicePlugin (7.14s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mqmww" [8f13b775-6ef2-4604-a624-4a861b5001b1] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004916093s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-688294
addons_test.go:1056: (dbg) Done: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-688294: (1.135055124s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.14s)

TestAddons/parallel/Yakd (5.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-7mmml" [a67c330a-d2bc-44b5-8cf9-8245a6e01af8] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004319317s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-688294 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-688294 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestCertOptions (71.84s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-666395 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-666395 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m10.637592485s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-666395 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-666395 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-666395 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-666395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-666395
--- PASS: TestCertOptions (71.84s)
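
The openssl step above is what verifies that the extra --apiserver-ips and --apiserver-names values landed in the serving certificate's SANs. An equivalent check in Go, as a sketch rather than the test's code; it assumes apiserver.crt has first been copied out of the VM, e.g. via minikube ssh:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed to be fetched first, e.g.:
		//   minikube -p cert-options-666395 ssh \
		//     "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Expect localhost / www.google.com and 127.0.0.1 / 192.168.15.15
		// among the SANs, per the flags passed to minikube start above.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs:", cert.IPAddresses)
	}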

TestCertExpiration (329.79s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-576705 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-576705 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m46.831073177s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-576705 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-576705 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.110243778s)
helpers_test.go:175: Cleaning up "cert-expiration-576705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-576705
--- PASS: TestCertExpiration (329.79s)
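
A companion sketch for the expiration test: with --cert-expiration=3m the client certificate's NotAfter should sit roughly three minutes out, and the second start with 8760h renews it for a year. This is illustrative only and assumes the certificate has been copied out of the profile's certs directory as client.crt:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("client.crt") // assumed copy of the profile's client cert
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("expires %s (in %s)\n",
			cert.NotAfter.Format(time.RFC3339),
			time.Until(cert.NotAfter).Round(time.Minute))
	}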

TestForceSystemdFlag (68.27s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-959082 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-959082 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.320905251s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-959082 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-959082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-959082
--- PASS: TestForceSystemdFlag (68.27s)

TestForceSystemdEnv (67.06s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-332204 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-332204 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.56590715s)
helpers_test.go:175: Cleaning up "force-systemd-env-332204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-332204
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-332204: (1.491531446s)
--- PASS: TestForceSystemdEnv (67.06s)

TestKVMDriverInstallOrUpdate (3.78s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.78s)

TestErrorSpam/setup (37.02s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-532865 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-532865 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-532865 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-532865 --driver=kvm2  --container-runtime=crio: (37.016126408s)
--- PASS: TestErrorSpam/setup (37.02s)

TestErrorSpam/start (0.32s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.69s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 status
--- PASS: TestErrorSpam/status (0.69s)

TestErrorSpam/pause (1.46s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.49s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (5.08s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 stop: (1.504509838s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 stop: (1.516982874s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-532865 --log_dir /tmp/nospam-532865 stop: (2.060790324s)
--- PASS: TestErrorSpam/stop (5.08s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19312-5094/.minikube/files/etc/test/nested/copy/12263/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.18s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135358 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0721 23:37:54.282803   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:54.288530   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:54.298778   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:54.319092   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:54.359374   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:54.439679   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:54.600075   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:54.920721   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:55.561651   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:56.842578   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:37:59.404342   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:38:04.525146   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-135358 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.179108414s)
--- PASS: TestFunctional/serial/StartWithProxy (55.18s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.06s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135358 --alsologtostderr -v=8
E0721 23:38:14.765975   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:38:35.247047   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-135358 --alsologtostderr -v=8: (35.06035812s)
functional_test.go:659: soft start took 35.060944308s for "functional-135358" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.06s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-135358 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 cache add registry.k8s.io/pause:3.1: (1.222374827s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 cache add registry.k8s.io/pause:3.3: (1.183703661s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 cache add registry.k8s.io/pause:latest: (1.161772352s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

TestFunctional/serial/CacheCmd/cache/add_local (2.01s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-135358 /tmp/TestFunctionalserialCacheCmdcacheadd_local3104815461/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cache add minikube-local-cache-test:functional-135358
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 cache add minikube-local-cache-test:functional-135358: (1.709795897s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cache delete minikube-local-cache-test:functional-135358
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-135358
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.01s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (199.933602ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 kubectl -- --context functional-135358 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-135358 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (59.06s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135358 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0721 23:39:16.208693   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-135358 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.057172872s)
functional_test.go:757: restart took 59.057309266s for "functional-135358" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (59.06s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-135358 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
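
The phase/status pairs above come from walking the control-plane pods' JSON. A sketch of that walk, reading .status.phase and the Ready condition for each tier=control-plane pod; the JSON fields and the component label are standard Kubernetes, but the structure of the harness's own helper may differ:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-135358",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// kubeadm labels static control-plane pods with component=<name>.
			fmt.Printf("%s phase: %s\n", p.Metadata.Labels["component"], p.Status.Phase)
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s ready: %s\n", p.Metadata.Labels["component"], c.Status)
				}
			}
		}
	}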

TestFunctional/serial/LogsCmd (1.29s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 logs: (1.290572463s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.27s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 logs --file /tmp/TestFunctionalserialLogsFileCmd2792776016/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 logs --file /tmp/TestFunctionalserialLogsFileCmd2792776016/001/logs.txt: (1.269154611s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

TestFunctional/serial/InvalidService (4.46s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-135358 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-135358
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-135358: exit status 115 (256.205304ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.121:31314 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-135358 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-135358 delete -f testdata/invalidsvc.yaml: (1.023198329s)
--- PASS: TestFunctional/serial/InvalidService (4.46s)

TestFunctional/parallel/ConfigCmd (0.33s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 config get cpus: exit status 14 (65.045055ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 config get cpus: exit status 14 (43.431647ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
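
The two Non-zero exit entries above hinge on minikube config get signalling a missing key via exit code 14. A sketch of asserting that from Go with os/exec; this is illustrative, not the test's code, and assumes the same binary path and profile as the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-135358",
			"config", "get", "cpus")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 14 {
			fmt.Println("key is unset, as expected") // matches the exit status 14 above
		} else {
			fmt.Println("unexpected result:", err)
		}
	}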

TestFunctional/parallel/DashboardCmd (29.97s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-135358 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-135358 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22143: os: process already finished
E0721 23:40:38.129872   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (29.97s)
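
The daemon:/stopping: pair above corresponds to launching minikube dashboard --url in the background and killing the process once its output has been checked. A minimal sketch of that lifecycle, not the harness's helper; it assumes the --url output arrives on stdout:

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "dashboard",
			"--url", "--port", "36195", "-p", "functional-135358")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil { // run as a background daemon
			panic(err)
		}
		// Read the first stdout line, which with --url should carry the
		// proxied dashboard URL.
		if sc := bufio.NewScanner(stdout); sc.Scan() {
			fmt.Println("dashboard at:", sc.Text())
		}
		_ = cmd.Process.Kill() // the "stopping" step; may race if already exited
		_ = cmd.Wait()
	}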

TestFunctional/parallel/DryRun (0.37s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135358 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-135358 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (213.214524ms)
-- stdout --
	* [functional-135358] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0721 23:40:06.973842   21592 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:40:06.973981   21592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:06.973992   21592 out.go:304] Setting ErrFile to fd 2...
	I0721 23:40:06.973998   21592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:06.974304   21592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:40:06.974922   21592 out.go:298] Setting JSON to false
	I0721 23:40:06.976006   21592 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1351,"bootTime":1721603856,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:40:06.976080   21592 start.go:139] virtualization: kvm guest
	I0721 23:40:06.978300   21592 out.go:177] * [functional-135358] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0721 23:40:06.980559   21592 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:40:06.980643   21592 notify.go:220] Checking for updates...
	I0721 23:40:06.983381   21592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:40:06.984793   21592 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:40:06.986050   21592 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:40:06.987069   21592 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0721 23:40:06.988225   21592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:40:06.989734   21592 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:40:06.990351   21592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:40:06.990411   21592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:40:07.010477   21592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I0721 23:40:07.010902   21592 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:40:07.011565   21592 main.go:141] libmachine: Using API Version  1
	I0721 23:40:07.011596   21592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:40:07.011915   21592 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:40:07.012061   21592 main.go:141] libmachine: (functional-135358) Calling .DriverName
	I0721 23:40:07.012286   21592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:40:07.012671   21592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:40:07.012702   21592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:40:07.031244   21592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I0721 23:40:07.031609   21592 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:40:07.032090   21592 main.go:141] libmachine: Using API Version  1
	I0721 23:40:07.032116   21592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:40:07.032468   21592 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:40:07.032690   21592 main.go:141] libmachine: (functional-135358) Calling .DriverName
	I0721 23:40:07.065885   21592 out.go:177] * Using the kvm2 driver based on existing profile
	I0721 23:40:07.067162   21592 start.go:297] selected driver: kvm2
	I0721 23:40:07.067178   21592 start.go:901] validating driver "kvm2" against &{Name:functional-135358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-135358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:40:07.067331   21592 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:40:07.069615   21592 out.go:177] 
	W0721 23:40:07.070801   21592 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0721 23:40:07.071994   21592 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135358 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
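Side note: the failure above is pure client-side validation, so it can be reproduced without touching the VM. A minimal sketch using the same flags as the log (the profile name belongs to this CI run):

	$ out/minikube-linux-amd64 start -p functional-135358 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
	$ echo $?    # 23, the exit code shown above for RSRC_INSUFFICIENT_REQ_MEMORY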

TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135358 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-135358 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.813442ms)

-- stdout --
	* [functional-135358] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0721 23:40:07.773488   21897 out.go:291] Setting OutFile to fd 1 ...
	I0721 23:40:07.773611   21897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:07.773620   21897 out.go:304] Setting ErrFile to fd 2...
	I0721 23:40:07.773625   21897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:40:07.773915   21897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0721 23:40:07.774424   21897 out.go:298] Setting JSON to false
	I0721 23:40:07.775395   21897 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1352,"bootTime":1721603856,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0721 23:40:07.775452   21897 start.go:139] virtualization: kvm guest
	I0721 23:40:07.777629   21897 out.go:177] * [functional-135358] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0721 23:40:07.778805   21897 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:40:07.778849   21897 notify.go:220] Checking for updates...
	I0721 23:40:07.781048   21897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:40:07.782134   21897 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0721 23:40:07.783270   21897 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0721 23:40:07.784536   21897 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0721 23:40:07.785741   21897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:40:07.787346   21897 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0721 23:40:07.787757   21897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:40:07.787827   21897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:40:07.803189   21897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43333
	I0721 23:40:07.803595   21897 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:40:07.804136   21897 main.go:141] libmachine: Using API Version  1
	I0721 23:40:07.804155   21897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:40:07.804544   21897 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:40:07.804747   21897 main.go:141] libmachine: (functional-135358) Calling .DriverName
	I0721 23:40:07.805001   21897 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:40:07.805435   21897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0721 23:40:07.805478   21897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0721 23:40:07.821652   21897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I0721 23:40:07.822088   21897 main.go:141] libmachine: () Calling .GetVersion
	I0721 23:40:07.822614   21897 main.go:141] libmachine: Using API Version  1
	I0721 23:40:07.822637   21897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0721 23:40:07.822959   21897 main.go:141] libmachine: () Calling .GetMachineName
	I0721 23:40:07.823149   21897 main.go:141] libmachine: (functional-135358) Calling .DriverName
	I0721 23:40:07.858942   21897 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0721 23:40:07.860178   21897 start.go:297] selected driver: kvm2
	I0721 23:40:07.860198   21897 start.go:901] validating driver "kvm2" against &{Name:functional-135358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-135358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:40:07.860341   21897 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:40:07.862742   21897 out.go:177] 
	W0721 23:40:07.863955   21897 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0721 23:40:07.865178   21897 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
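Side note: the French output above is selected from the standard locale environment variables. Assuming an fr_FR.UTF-8 locale is generated on the host, the same validation error can be requested in French directly:

	$ LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-135358 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio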

TestFunctional/parallel/StatusCmd (0.72s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.72s)
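Side note: status -f takes a Go template over minikube's status fields; the "kublet" in the command above is just literal label text in the template, while the field itself is .Kubelet. A sketch, with the output one would expect on a healthy profile:

	$ out/minikube-linux-amd64 -p functional-135358 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	host:Running,kubelet:Running,apiserver:Running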

TestFunctional/parallel/ServiceCmdConnect (10.57s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-135358 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-135358 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-8tx4x" [063a30aa-7be6-4ad1-8638-4256105571c9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-8tx4x" [063a30aa-7be6-4ad1-8638-4256105571c9] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003472352s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.121:31983
functional_test.go:1671: http://192.168.39.121:31983: success! body:

Hostname: hello-node-connect-57b4589c47-8tx4x

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.121:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.121:31983
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.57s)
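Side note: condensed, the round trip above is three commands (taken verbatim from the log); only the NodePort changes between runs:

	$ kubectl --context functional-135358 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	$ kubectl --context functional-135358 expose deployment hello-node-connect --type=NodePort --port=8080
	$ out/minikube-linux-amd64 -p functional-135358 service hello-node-connect --url
	http://192.168.39.121:31983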

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (43.89s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [79a6f12d-7a1c-49ea-b443-8c286cbd916d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004901856s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-135358 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-135358 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-135358 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-135358 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-135358 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f745d571-c818-4b9e-ac8f-00808e043765] Pending
helpers_test.go:344: "sp-pod" [f745d571-c818-4b9e-ac8f-00808e043765] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f745d571-c818-4b9e-ac8f-00808e043765] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.00328858s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-135358 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-135358 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-135358 delete -f testdata/storage-provisioner/pod.yaml: (2.224816801s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-135358 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b290d0a0-d80b-47fe-a603-6d6e27669eb7] Pending
helpers_test.go:344: "sp-pod" [b290d0a0-d80b-47fe-a603-6d6e27669eb7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b290d0a0-d80b-47fe-a603-6d6e27669eb7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004294959s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-135358 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.89s)
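Side note: the testdata manifests are not reproduced in the log. An illustrative stand-in for the claim (hypothetical size; it relies on the default storage-provisioner-backed StorageClass that the test verifies first) would look like:

	$ kubectl --context functional-135358 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF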

TestFunctional/parallel/SSHCmd (0.43s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.2s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh -n functional-135358 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cp functional-135358:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1385613831/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh -n functional-135358 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh -n functional-135358 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.20s)
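Side note: as the three exchanges above show, minikube cp copies both host-to-VM and VM-to-host, and a target under a missing directory such as /tmp/does/not/exist is created on the fly. The basic pattern, verbatim from the log:

	$ out/minikube-linux-amd64 -p functional-135358 cp testdata/cp-test.txt /home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p functional-135358 ssh -n functional-135358 "sudo cat /home/docker/cp-test.txt"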

TestFunctional/parallel/MySQL (23.9s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-135358 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-8xfzw" [d09c4559-2409-4fbf-af15-d4f5762716f5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-8xfzw" [d09c4559-2409-4fbf-af15-d4f5762716f5] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003984928s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-135358 exec mysql-64454c8b5c-8xfzw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-135358 exec mysql-64454c8b5c-8xfzw -- mysql -ppassword -e "show databases;": exit status 1 (122.378309ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-135358 exec mysql-64454c8b5c-8xfzw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-135358 exec mysql-64454c8b5c-8xfzw -- mysql -ppassword -e "show databases;": exit status 1 (149.123961ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-135358 exec mysql-64454c8b5c-8xfzw -- mysql -ppassword -e "show databases;"
2024/07/21 23:40:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (23.90s)
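Side note: the two ERROR 2002 exits above are expected; the pod reports Running before mysqld is actually accepting connections on its socket, and the test simply retries. A hedged equivalent of that retry loop (the pod name is specific to this run):

	$ until kubectl --context functional-135358 exec mysql-64454c8b5c-8xfzw -- mysql -ppassword -e "show databases;"; do sleep 2; done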

TestFunctional/parallel/FileSync (0.25s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12263/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo cat /etc/test/nested/copy/12263/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.46s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12263.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo cat /etc/ssl/certs/12263.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12263.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo cat /usr/share/ca-certificates/12263.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/122632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo cat /etc/ssl/certs/122632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/122632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo cat /usr/share/ca-certificates/122632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)
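Side note: the .0 files checked above follow OpenSSL's subject-hash naming convention for CA directories, which is presumably how 51391683.0 and 3ec20f2e.0 pair with the two .pem files. The hash for a given certificate can be computed with:

	$ openssl x509 -noout -subject_hash -in /etc/ssl/certs/12263.pem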

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-135358 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 ssh "sudo systemctl is-active docker": exit status 1 (244.200473ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 ssh "sudo systemctl is-active containerd": exit status 1 (269.259027ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
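Side note: systemctl is-active exits 0 only for an active unit; "inactive" yields exit status 3, which ssh propagates as "Process exited with status 3" above. That non-zero exit is exactly what this test wants on a crio profile:

	$ out/minikube-linux-amd64 -p functional-135358 ssh "sudo systemctl is-active docker"
	inactive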

TestFunctional/parallel/License (0.54s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.54s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-135358 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-135358 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-l6dwh" [9044f618-b96d-4c1b-929c-b29a567c61ba] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-l6dwh" [9044f618-b96d-4c1b-929c-b29a567c61ba] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004264158s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "214.405656ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "42.452475ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "228.242657ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "42.063438ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)
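Side note: the --light variant is documented to skip validating cluster status, which is consistent with the timings above (~42ms against ~228ms for the full listing):

	$ out/minikube-linux-amd64 profile list -o json --light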

TestFunctional/parallel/MountCmd/any-port (8.31s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdany-port3516942528/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721605196943981210" to /tmp/TestFunctionalparallelMountCmdany-port3516942528/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721605196943981210" to /tmp/TestFunctionalparallelMountCmdany-port3516942528/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721605196943981210" to /tmp/TestFunctionalparallelMountCmdany-port3516942528/001/test-1721605196943981210
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (183.359644ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 21 23:39 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 21 23:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 21 23:39 test-1721605196943981210
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh cat /mount-9p/test-1721605196943981210
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-135358 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8a08654d-1461-411a-8f79-c23b9b7e0d89] Pending
helpers_test.go:344: "busybox-mount" [8a08654d-1461-411a-8f79-c23b9b7e0d89] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8a08654d-1461-411a-8f79-c23b9b7e0d89] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8a08654d-1461-411a-8f79-c23b9b7e0d89] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003737208s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-135358 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdany-port3516942528/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.31s)
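Side note: the single failed findmnt at the top of this test is a startup race (the mount daemon was not yet serving) and is retried. The manual flow, with a hypothetical host directory in place of the test's temp dir:

	$ out/minikube-linux-amd64 mount -p functional-135358 /tmp/src:/mount-9p &
	$ out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-linux-amd64 -p functional-135358 ssh "sudo umount -f /mount-9p"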

TestFunctional/parallel/MountCmd/specific-port (1.58s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdspecific-port1474885429/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (182.004595ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdspecific-port1474885429/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 ssh "sudo umount -f /mount-9p": exit status 1 (265.914638ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-135358 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdspecific-port1474885429/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.58s)

TestFunctional/parallel/ServiceCmd/List (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 service list -o json
functional_test.go:1490: Took "339.059831ms" to run "out/minikube-linux-amd64 -p functional-135358 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.121:31938
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.9s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3822480890/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3822480890/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3822480890/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-135358 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3822480890/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3822480890/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135358 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3822480890/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.90s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.121:31938
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135358 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-135358
localhost/kicbase/echo-server:functional-135358
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135358 image ls --format short --alsologtostderr:
I0721 23:40:25.543284   22815 out.go:291] Setting OutFile to fd 1 ...
I0721 23:40:25.543762   22815 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:25.543815   22815 out.go:304] Setting ErrFile to fd 2...
I0721 23:40:25.543833   22815 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:25.544340   22815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
I0721 23:40:25.545068   22815 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:25.545189   22815 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:25.545559   22815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:25.545597   22815 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:25.560016   22815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
I0721 23:40:25.560419   22815 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:25.560988   22815 main.go:141] libmachine: Using API Version  1
I0721 23:40:25.561011   22815 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:25.561313   22815 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:25.561518   22815 main.go:141] libmachine: (functional-135358) Calling .GetState
I0721 23:40:25.563255   22815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:25.563298   22815 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:25.577914   22815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37985
I0721 23:40:25.578415   22815 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:25.578942   22815 main.go:141] libmachine: Using API Version  1
I0721 23:40:25.578967   22815 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:25.579280   22815 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:25.579456   22815 main.go:141] libmachine: (functional-135358) Calling .DriverName
I0721 23:40:25.579646   22815 ssh_runner.go:195] Run: systemctl --version
I0721 23:40:25.579666   22815 main.go:141] libmachine: (functional-135358) Calling .GetSSHHostname
I0721 23:40:25.582219   22815 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:25.582593   22815 main.go:141] libmachine: (functional-135358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:90:0c", ip: ""} in network mk-functional-135358: {Iface:virbr1 ExpiryTime:2024-07-22 00:37:23 +0000 UTC Type:0 Mac:52:54:00:e5:90:0c Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-135358 Clientid:01:52:54:00:e5:90:0c}
I0721 23:40:25.582665   22815 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined IP address 192.168.39.121 and MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:25.582810   22815 main.go:141] libmachine: (functional-135358) Calling .GetSSHPort
I0721 23:40:25.582979   22815 main.go:141] libmachine: (functional-135358) Calling .GetSSHKeyPath
I0721 23:40:25.583164   22815 main.go:141] libmachine: (functional-135358) Calling .GetSSHUsername
I0721 23:40:25.583289   22815 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/functional-135358/id_rsa Username:docker}
I0721 23:40:25.660912   22815 ssh_runner.go:195] Run: sudo crictl images --output json
I0721 23:40:25.717749   22815 main.go:141] libmachine: Making call to close driver server
I0721 23:40:25.717761   22815 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:25.718031   22815 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:25.718098   22815 main.go:141] libmachine: Making call to close connection to plugin binary
I0721 23:40:25.718119   22815 main.go:141] libmachine: Making call to close driver server
I0721 23:40:25.718131   22815 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:25.718057   22815 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
I0721 23:40:25.718421   22815 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
I0721 23:40:25.718445   22815 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:25.718475   22815 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
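Side note: as the stderr trace shows, image ls is answered by crictl running over SSH inside the VM; the underlying query can be issued directly:

	$ out/minikube-linux-amd64 -p functional-135358 ssh "sudo crictl images --output json"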

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135358 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-135358  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-135358  | dd97a727eb4f9 | 1.47MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | fffffc90d343c | 192MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| localhost/minikube-local-cache-test     | functional-135358  | 54ffb02981749 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135358 image ls --format table --alsologtostderr:
I0721 23:40:33.002976   23002 out.go:291] Setting OutFile to fd 1 ...
I0721 23:40:33.003077   23002 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:33.003085   23002 out.go:304] Setting ErrFile to fd 2...
I0721 23:40:33.003090   23002 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:33.003251   23002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
I0721 23:40:33.003761   23002 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:33.003881   23002 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:33.004208   23002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:33.004246   23002 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:33.018367   23002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
I0721 23:40:33.018876   23002 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:33.019443   23002 main.go:141] libmachine: Using API Version  1
I0721 23:40:33.019466   23002 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:33.019761   23002 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:33.019920   23002 main.go:141] libmachine: (functional-135358) Calling .GetState
I0721 23:40:33.021489   23002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:33.021525   23002 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:33.035928   23002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43913
I0721 23:40:33.036323   23002 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:33.036786   23002 main.go:141] libmachine: Using API Version  1
I0721 23:40:33.036808   23002 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:33.037079   23002 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:33.037221   23002 main.go:141] libmachine: (functional-135358) Calling .DriverName
I0721 23:40:33.037404   23002 ssh_runner.go:195] Run: systemctl --version
I0721 23:40:33.037426   23002 main.go:141] libmachine: (functional-135358) Calling .GetSSHHostname
I0721 23:40:33.039717   23002 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:33.040073   23002 main.go:141] libmachine: (functional-135358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:90:0c", ip: ""} in network mk-functional-135358: {Iface:virbr1 ExpiryTime:2024-07-22 00:37:23 +0000 UTC Type:0 Mac:52:54:00:e5:90:0c Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-135358 Clientid:01:52:54:00:e5:90:0c}
I0721 23:40:33.040097   23002 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined IP address 192.168.39.121 and MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:33.040253   23002 main.go:141] libmachine: (functional-135358) Calling .GetSSHPort
I0721 23:40:33.040426   23002 main.go:141] libmachine: (functional-135358) Calling .GetSSHKeyPath
I0721 23:40:33.040612   23002 main.go:141] libmachine: (functional-135358) Calling .GetSSHUsername
I0721 23:40:33.040764   23002 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/functional-135358/id_rsa Username:docker}
I0721 23:40:33.116689   23002 ssh_runner.go:195] Run: sudo crictl images --output json
I0721 23:40:33.153248   23002 main.go:141] libmachine: Making call to close driver server
I0721 23:40:33.153268   23002 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:33.153504   23002 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:33.153522   23002 main.go:141] libmachine: Making call to close connection to plugin binary
I0721 23:40:33.153535   23002 main.go:141] libmachine: Making call to close driver server
I0721 23:40:33.153543   23002 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:33.153550   23002 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
I0721 23:40:33.153763   23002 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:33.153776   23002 main.go:141] libmachine: Making call to close connection to plugin binary
I0721 23:40:33.153797   23002 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)
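
The table above is rendered from the same crictl JSON: image IDs are truncated to 13 characters and sizes are shown humanized. A minimal sketch of that kind of rendering with text/tabwriter, using two rows copied from the output (the ASCII borders are cosmetic and omitted here; this is illustrative, not minikube's actual renderer):

// Minimal sketch: print an aligned image table with text/tabwriter.
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

type row struct {
	image, tag, id, size string
}

func main() {
	rows := []row{ // two rows copied from the table above
		{"gcr.io/k8s-minikube/storage-provisioner", "v5", "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562", "31.5MB"},
		{"registry.k8s.io/pause", "latest", "350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06", "247kB"},
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintln(w, "Image\tTag\tImage ID\tSize")
	for _, r := range rows {
		// Truncate the full 64-char digest-style ID to 13 chars, as the table does.
		fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", r.image, r.tag, r.id[:13], r.size)
	}
	w.Flush()
}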

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135358 image ls --format json --alsologtostderr:
[{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io
/library/mysql:5.7"],"size":"519571821"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244"],"repoTags":["docker.io/library/nginx:latest"],"size":"191746190"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee
6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-135358"],"size":"4943877"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b
0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"dd97a727eb4f95ee7caf9ccc8a665c05cb061918df11e0887f6a75df6a2adfc2","repoDigests":["localhost/my-image@sha256:173f1db5c0edddac9e7d2bc8fd2c8fd3636359273f76762ae76d798d05947b73"],"repoTags":["localhost/my-image:functional-135358"],"size":"1468599"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859
475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"0e3c6372f90f19106eaea85a5f63ae671b72dd688e344b2788cd1d5e6f5c103f","repoDigests":["docker.io/library/667b10b5d566131cbe34eeffb0a05e5110218867fc3669358ac3e9451effe3d3-tmp@sha256:df1499379e9b33fde248d9983ef603a6f60cb3c3441dadea5543dc671eec8cee"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size"
:"31470524"},{"id":"54ffb0298174949cef155a12ac3f9272510d6c099cca579a2d6bea70d3a8e1ce","repoDigests":["localhost/minikube-local-cache-test@sha256:d092cffd30c383ec9d8dc9aa23bb9873f0a3e8457346a49dfc03c81f3b477c1f"],"repoTags":["localhost/minikube-local-cache-test:functional-135358"],"size":"3330"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size"
:"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135358 image ls --format json --alsologtostderr:
I0721 23:40:32.754953   22978 out.go:291] Setting OutFile to fd 1 ...
I0721 23:40:32.755213   22978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:32.755222   22978 out.go:304] Setting ErrFile to fd 2...
I0721 23:40:32.755226   22978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:32.755376   22978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
I0721 23:40:32.755913   22978 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:32.756015   22978 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:32.756394   22978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:32.756438   22978 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:32.770950   22978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
I0721 23:40:32.771365   22978 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:32.771893   22978 main.go:141] libmachine: Using API Version  1
I0721 23:40:32.771912   22978 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:32.772252   22978 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:32.772439   22978 main.go:141] libmachine: (functional-135358) Calling .GetState
I0721 23:40:32.774015   22978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:32.774059   22978 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:32.788193   22978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
I0721 23:40:32.788591   22978 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:32.789018   22978 main.go:141] libmachine: Using API Version  1
I0721 23:40:32.789041   22978 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:32.789303   22978 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:32.789478   22978 main.go:141] libmachine: (functional-135358) Calling .DriverName
I0721 23:40:32.789678   22978 ssh_runner.go:195] Run: systemctl --version
I0721 23:40:32.789702   22978 main.go:141] libmachine: (functional-135358) Calling .GetSSHHostname
I0721 23:40:32.792176   22978 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:32.792590   22978 main.go:141] libmachine: (functional-135358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:90:0c", ip: ""} in network mk-functional-135358: {Iface:virbr1 ExpiryTime:2024-07-22 00:37:23 +0000 UTC Type:0 Mac:52:54:00:e5:90:0c Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-135358 Clientid:01:52:54:00:e5:90:0c}
I0721 23:40:32.792623   22978 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined IP address 192.168.39.121 and MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:32.792762   22978 main.go:141] libmachine: (functional-135358) Calling .GetSSHPort
I0721 23:40:32.792931   22978 main.go:141] libmachine: (functional-135358) Calling .GetSSHKeyPath
I0721 23:40:32.793072   22978 main.go:141] libmachine: (functional-135358) Calling .GetSSHUsername
I0721 23:40:32.793203   22978 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/functional-135358/id_rsa Username:docker}
I0721 23:40:32.874811   22978 ssh_runner.go:195] Run: sudo crictl images --output json
I0721 23:40:32.959765   22978 main.go:141] libmachine: Making call to close driver server
I0721 23:40:32.959783   22978 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:32.960038   22978 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
I0721 23:40:32.960090   22978 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:32.960101   22978 main.go:141] libmachine: Making call to close connection to plugin binary
I0721 23:40:32.960114   22978 main.go:141] libmachine: Making call to close driver server
I0721 23:40:32.960135   22978 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:32.960383   22978 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
I0721 23:40:32.960443   22978 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:32.960470   22978 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
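
The stdout above is a flat JSON array of image records. A minimal sketch of decoding it, with struct tags read straight off the field names in the output; the sample literal is a single entry taken from the full array:

// Minimal sketch: decode the `image ls --format json` output shape shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// One entry copied from the stdout above; feed it the full array in practice.
	data := []byte(`[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-135358"],"size":"4943877"}]`)
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %v  %s bytes\n", img.ID[:13], img.RepoTags, img.Size)
	}
}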

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135358 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
  repoDigests:
  - registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
  repoTags:
  - registry.k8s.io/pause:latest
  size: "247077"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
  repoDigests:
  - docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
  - docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244
  repoTags:
  - docker.io/library/nginx:latest
  size: "191746190"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
  repoDigests:
  - registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
  - registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
  repoTags:
  - registry.k8s.io/kube-apiserver:v1.30.3
  size: "117609954"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
  repoDigests:
  - localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
  repoTags:
  - localhost/kicbase/echo-server:functional-135358
  size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
  repoDigests:
  - registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
  repoTags:
  - registry.k8s.io/echoserver:1.8
  size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
  repoDigests:
  - registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
  repoTags:
  - registry.k8s.io/pause:3.3
  size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
  repoDigests:
  - docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
  - docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
  repoTags:
  - docker.io/library/mysql:5.7
  size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
  repoDigests:
  - gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
  - gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
  repoTags:
  - gcr.io/k8s-minikube/busybox:1.28.4-glibc
  size: "4631262"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
  repoDigests:
  - registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
  - registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
  repoTags:
  - registry.k8s.io/kube-scheduler:v1.30.3
  size: "63051080"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
  repoDigests:
  - registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
  - registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
  repoTags:
  - registry.k8s.io/coredns/coredns:v1.11.1
  size: "61245718"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
  repoDigests:
  - registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
  - registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
  repoTags:
  - registry.k8s.io/kube-controller-manager:v1.30.3
  size: "112198984"
- id: 54ffb0298174949cef155a12ac3f9272510d6c099cca579a2d6bea70d3a8e1ce
  repoDigests:
  - localhost/minikube-local-cache-test@sha256:d092cffd30c383ec9d8dc9aa23bb9873f0a3e8457346a49dfc03c81f3b477c1f
  repoTags:
  - localhost/minikube-local-cache-test:functional-135358
  size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
  repoDigests:
  - registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
  - registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
  repoTags:
  - registry.k8s.io/etcd:3.5.12-0
  size: "150779692"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
  repoDigests:
  - registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
  - registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
  repoTags:
  - registry.k8s.io/kube-proxy:v1.30.3
  size: "85953945"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
  repoDigests:
  - registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
  repoTags:
  - registry.k8s.io/pause:3.1
  size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
  repoDigests:
  - registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
  - registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
  repoTags:
  - registry.k8s.io/pause:3.9
  size: "750414"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
  repoDigests:
  - docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
  - docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
  repoTags:
  - docker.io/kindest/kindnetd:v20240715-585640e9
  size: "87165492"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
  repoDigests:
  - gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
  - gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
  repoTags:
  - gcr.io/k8s-minikube/storage-provisioner:v5
  size: "31470524"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135358 image ls --format yaml --alsologtostderr:
I0721 23:40:25.789565   22840 out.go:291] Setting OutFile to fd 1 ...
I0721 23:40:25.789673   22840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:25.789683   22840 out.go:304] Setting ErrFile to fd 2...
I0721 23:40:25.789687   22840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:25.789872   22840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
I0721 23:40:25.790433   22840 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:25.790525   22840 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:25.790922   22840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:25.790970   22840 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:25.805391   22840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
I0721 23:40:25.805886   22840 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:25.806597   22840 main.go:141] libmachine: Using API Version  1
I0721 23:40:25.806651   22840 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:25.806972   22840 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:25.807159   22840 main.go:141] libmachine: (functional-135358) Calling .GetState
I0721 23:40:25.808918   22840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:25.808949   22840 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:25.826425   22840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
I0721 23:40:25.826828   22840 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:25.827341   22840 main.go:141] libmachine: Using API Version  1
I0721 23:40:25.827360   22840 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:25.827705   22840 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:25.827898   22840 main.go:141] libmachine: (functional-135358) Calling .DriverName
I0721 23:40:25.828114   22840 ssh_runner.go:195] Run: systemctl --version
I0721 23:40:25.828146   22840 main.go:141] libmachine: (functional-135358) Calling .GetSSHHostname
I0721 23:40:25.831260   22840 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:25.832487   22840 main.go:141] libmachine: (functional-135358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:90:0c", ip: ""} in network mk-functional-135358: {Iface:virbr1 ExpiryTime:2024-07-22 00:37:23 +0000 UTC Type:0 Mac:52:54:00:e5:90:0c Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-135358 Clientid:01:52:54:00:e5:90:0c}
I0721 23:40:25.832508   22840 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined IP address 192.168.39.121 and MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:25.833183   22840 main.go:141] libmachine: (functional-135358) Calling .GetSSHPort
I0721 23:40:25.833357   22840 main.go:141] libmachine: (functional-135358) Calling .GetSSHKeyPath
I0721 23:40:25.833500   22840 main.go:141] libmachine: (functional-135358) Calling .GetSSHUsername
I0721 23:40:25.833645   22840 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/functional-135358/id_rsa Username:docker}
I0721 23:40:25.908581   22840 ssh_runner.go:195] Run: sudo crictl images --output json
I0721 23:40:25.945872   22840 main.go:141] libmachine: Making call to close driver server
I0721 23:40:25.945887   22840 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:25.946155   22840 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:25.946184   22840 main.go:141] libmachine: Making call to close connection to plugin binary
I0721 23:40:25.946212   22840 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
I0721 23:40:25.946283   22840 main.go:141] libmachine: Making call to close driver server
I0721 23:40:25.946297   22840 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:25.946521   22840 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:25.946537   22840 main.go:141] libmachine: Making call to close connection to plugin binary
I0721 23:40:25.946550   22840 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)
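
The YAML variant carries the same records as the JSON one. A sketch of decoding a single entry, assuming gopkg.in/yaml.v3 (any YAML library works); explicit struct tags are needed because yaml.v3 lowercases Go field names by default, while the output uses camelCase keys:

// Minimal sketch: decode one entry of the `image ls --format yaml` output.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// First entry from the stdout above.
	data := []byte(`- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
  repoDigests:
  - registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
  repoTags:
  - registry.k8s.io/pause:latest
  size: "247077"
`)
	var images []image
	if err := yaml.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:13], img.RepoTags, img.Size)
	}
}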

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135358 ssh pgrep buildkitd: exit status 1 (176.487428ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image build -t localhost/my-image:functional-135358 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 image build -t localhost/my-image:functional-135358 testdata/build --alsologtostderr: (6.347774623s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135358 image build -t localhost/my-image:functional-135358 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0e3c6372f90
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-135358
--> dd97a727eb4
Successfully tagged localhost/my-image:functional-135358
dd97a727eb4f95ee7caf9ccc8a665c05cb061918df11e0887f6a75df6a2adfc2
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135358 image build -t localhost/my-image:functional-135358 testdata/build --alsologtostderr:
I0721 23:40:26.174549   22893 out.go:291] Setting OutFile to fd 1 ...
I0721 23:40:26.174732   22893 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:26.174742   22893 out.go:304] Setting ErrFile to fd 2...
I0721 23:40:26.174747   22893 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 23:40:26.174902   22893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
I0721 23:40:26.175471   22893 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:26.176053   22893 config.go:182] Loaded profile config "functional-135358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0721 23:40:26.176436   22893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:26.176477   22893 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:26.191026   22893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45139
I0721 23:40:26.191473   22893 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:26.192043   22893 main.go:141] libmachine: Using API Version  1
I0721 23:40:26.192065   22893 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:26.192416   22893 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:26.192635   22893 main.go:141] libmachine: (functional-135358) Calling .GetState
I0721 23:40:26.194405   22893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0721 23:40:26.194446   22893 main.go:141] libmachine: Launching plugin server for driver kvm2
I0721 23:40:26.209942   22893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
I0721 23:40:26.210361   22893 main.go:141] libmachine: () Calling .GetVersion
I0721 23:40:26.210810   22893 main.go:141] libmachine: Using API Version  1
I0721 23:40:26.210829   22893 main.go:141] libmachine: () Calling .SetConfigRaw
I0721 23:40:26.211078   22893 main.go:141] libmachine: () Calling .GetMachineName
I0721 23:40:26.211240   22893 main.go:141] libmachine: (functional-135358) Calling .DriverName
I0721 23:40:26.211427   22893 ssh_runner.go:195] Run: systemctl --version
I0721 23:40:26.211466   22893 main.go:141] libmachine: (functional-135358) Calling .GetSSHHostname
I0721 23:40:26.213770   22893 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:26.214185   22893 main.go:141] libmachine: (functional-135358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:90:0c", ip: ""} in network mk-functional-135358: {Iface:virbr1 ExpiryTime:2024-07-22 00:37:23 +0000 UTC Type:0 Mac:52:54:00:e5:90:0c Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:functional-135358 Clientid:01:52:54:00:e5:90:0c}
I0721 23:40:26.214242   22893 main.go:141] libmachine: (functional-135358) DBG | domain functional-135358 has defined IP address 192.168.39.121 and MAC address 52:54:00:e5:90:0c in network mk-functional-135358
I0721 23:40:26.214315   22893 main.go:141] libmachine: (functional-135358) Calling .GetSSHPort
I0721 23:40:26.214523   22893 main.go:141] libmachine: (functional-135358) Calling .GetSSHKeyPath
I0721 23:40:26.214693   22893 main.go:141] libmachine: (functional-135358) Calling .GetSSHUsername
I0721 23:40:26.214878   22893 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/functional-135358/id_rsa Username:docker}
I0721 23:40:26.296635   22893 build_images.go:161] Building image from path: /tmp/build.3919621948.tar
I0721 23:40:26.296704   22893 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0721 23:40:26.309497   22893 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3919621948.tar
I0721 23:40:26.313772   22893 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3919621948.tar: stat -c "%s %y" /var/lib/minikube/build/build.3919621948.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3919621948.tar': No such file or directory
I0721 23:40:26.313805   22893 ssh_runner.go:362] scp /tmp/build.3919621948.tar --> /var/lib/minikube/build/build.3919621948.tar (3072 bytes)
I0721 23:40:26.339387   22893 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3919621948
I0721 23:40:26.348429   22893 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3919621948 -xf /var/lib/minikube/build/build.3919621948.tar
I0721 23:40:26.357538   22893 crio.go:315] Building image: /var/lib/minikube/build/build.3919621948
I0721 23:40:26.357610   22893 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-135358 /var/lib/minikube/build/build.3919621948 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0721 23:40:32.451257   22893 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-135358 /var/lib/minikube/build/build.3919621948 --cgroup-manager=cgroupfs: (6.093608943s)
I0721 23:40:32.451357   22893 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3919621948
I0721 23:40:32.461485   22893 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3919621948.tar
I0721 23:40:32.470884   22893 build_images.go:217] Built localhost/my-image:functional-135358 from /tmp/build.3919621948.tar
I0721 23:40:32.470922   22893 build_images.go:133] succeeded building to: functional-135358
I0721 23:40:32.470929   22893 build_images.go:134] failed building to: 
I0721 23:40:32.470952   22893 main.go:141] libmachine: Making call to close driver server
I0721 23:40:32.470966   22893 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:32.471271   22893 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:32.471285   22893 main.go:141] libmachine: Making call to close connection to plugin binary
I0721 23:40:32.471293   22893 main.go:141] libmachine: Making call to close driver server
I0721 23:40:32.471305   22893 main.go:141] libmachine: (functional-135358) Calling .Close
I0721 23:40:32.471895   22893 main.go:141] libmachine: Successfully made call to close driver server
I0721 23:40:32.471906   22893 main.go:141] libmachine: (functional-135358) DBG | Closing plugin on server side
I0721 23:40:32.471912   22893 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.76s)
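
The stderr above traces the whole build: the local context is tarred to /tmp, copied up to /var/lib/minikube/build on the guest, unpacked, and built with podman under the cgroupfs manager, then cleaned up. A sketch of the guest-side sequence using os/exec, assuming it runs directly on the VM (minikube drives each step over SSH) and with an illustrative directory name in place of the random build.3919621948 suffix:

// Minimal sketch of the logged build sequence, run directly on the guest.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and panics on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	dir := "/var/lib/minikube/build/build.example" // illustrative; real runs use a random suffix
	run("sudo", "mkdir", "-p", dir)
	run("sudo", "tar", "-C", dir, "-xf", dir+".tar") // the context tar was copied up beforehand
	run("sudo", "podman", "build",
		"-t", "localhost/my-image:functional-135358",
		dir, "--cgroup-manager=cgroupfs")
	run("sudo", "rm", "-rf", dir) // clean up, as the log does
}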

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.720174976s)
functional_test.go:346: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-135358
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image load --daemon kicbase/echo-server:functional-135358 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 image load --daemon kicbase/echo-server:functional-135358 --alsologtostderr: (2.488390399s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.70s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image load --daemon kicbase/echo-server:functional-135358 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-135358
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image load --daemon kicbase/echo-server:functional-135358 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image save kicbase/echo-server:functional-135358 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-135358 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.278402884s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-135358 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.52s)
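
ImageSaveToFile and ImageLoadFromFile round-trip the image through echo-server-save.tar. A sketch that peeks at such an archive's manifest.json to confirm the expected tag before loading, assuming the docker-archive layout (a top-level manifest.json with Config/RepoTags/Layers entries); an OCI archive would carry an index.json instead:

// Minimal sketch: list the RepoTags recorded in a docker-archive tarball.
package main

import (
	"archive/tar"
	"encoding/json"
	"fmt"
	"io"
	"os"
)

type manifestEntry struct {
	Config   string   `json:"Config"`
	RepoTags []string `json:"RepoTags"`
	Layers   []string `json:"Layers"`
}

func main() {
	f, err := os.Open("echo-server-save.tar") // the archive produced by `image save` above
	if err != nil {
		panic(err)
	}
	defer f.Close()
	tr := tar.NewReader(f)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		if hdr.Name != "manifest.json" {
			continue
		}
		var entries []manifestEntry
		if err := json.NewDecoder(tr).Decode(&entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			fmt.Println(e.RepoTags)
		}
	}
}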

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-135358
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-135358
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-135358
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (194.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-564251 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0721 23:42:54.282759   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:43:21.970197   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-564251 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.485357517s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-564251 -- rollout status deployment/busybox: (3.891453374s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-2jrmb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-s2cqd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-tvjh7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-2jrmb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-s2cqd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-tvjh7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-2jrmb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-s2cqd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-tvjh7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.93s)
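
The deploy test resolves three names (kubernetes.io, kubernetes.default, and its fully qualified form) from every busybox replica, so a failure isolates whether external DNS, search-path expansion, or cluster DNS itself is broken. A sketch of the same loop driven through kubectl with os/exec; the pod names are copied from the run above and will differ on any other cluster:

// Minimal sketch: repeat the test's nslookup matrix across pods and names.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-2jrmb", "busybox-fc5497c4f-s2cqd", "busybox-fc5497c4f-tvjh7"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-564251",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			fmt.Printf("%s -> %s\n%s", pod, name, out)
			if err != nil {
				panic(err)
			}
		}
	}
}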

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-2jrmb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-2jrmb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-s2cqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-s2cqd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-tvjh7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-564251 -- exec busybox-fc5497c4f-tvjh7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)
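
The shell pipeline above is terse: `awk 'NR==5'` keeps the fifth line of busybox nslookup output (the answer line for host.minikube.internal) and `cut -d' ' -f3` takes its third space-separated field, the host IP that is then pinged. A sketch of the same extraction in Go, with an illustrative sample shaped like busybox's `Address 1:` answer format (an assumption about the nslookup layout, the same one the test itself makes):

// Minimal sketch: mirror the awk/cut pipeline that extracts the host IP.
package main

import (
	"fmt"
	"strings"
)

func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 in the awk above
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -d' ' -f3
}

func main() {
	// Illustrative output shaped like busybox nslookup; real runs exec it inside the pod.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // prints 192.168.39.1
}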

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-564251 -v=7 --alsologtostderr
E0721 23:44:55.172939   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:55.178244   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:55.188508   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:55.208776   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:55.249079   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:55.329391   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:55.490461   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:55.810872   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:56.451592   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0721 23:44:57.731837   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-564251 -v=7 --alsologtostderr: (56.061038778s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.84s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-564251 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.2s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp testdata/cp-test.txt ha-564251:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251 "sudo cat /home/docker/cp-test.txt"
E0721 23:45:00.292661   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251:/home/docker/cp-test.txt ha-564251-m02:/home/docker/cp-test_ha-564251_ha-564251-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m02 "sudo cat /home/docker/cp-test_ha-564251_ha-564251-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251:/home/docker/cp-test.txt ha-564251-m03:/home/docker/cp-test_ha-564251_ha-564251-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m03 "sudo cat /home/docker/cp-test_ha-564251_ha-564251-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251:/home/docker/cp-test.txt ha-564251-m04:/home/docker/cp-test_ha-564251_ha-564251-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m04 "sudo cat /home/docker/cp-test_ha-564251_ha-564251-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp testdata/cp-test.txt ha-564251-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m02:/home/docker/cp-test.txt ha-564251:/home/docker/cp-test_ha-564251-m02_ha-564251.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251 "sudo cat /home/docker/cp-test_ha-564251-m02_ha-564251.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m02:/home/docker/cp-test.txt ha-564251-m03:/home/docker/cp-test_ha-564251-m02_ha-564251-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m03 "sudo cat /home/docker/cp-test_ha-564251-m02_ha-564251-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m02:/home/docker/cp-test.txt ha-564251-m04:/home/docker/cp-test_ha-564251-m02_ha-564251-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m02 "sudo cat /home/docker/cp-test.txt"
E0721 23:45:05.413520   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m04 "sudo cat /home/docker/cp-test_ha-564251-m02_ha-564251-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp testdata/cp-test.txt ha-564251-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt ha-564251:/home/docker/cp-test_ha-564251-m03_ha-564251.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251 "sudo cat /home/docker/cp-test_ha-564251-m03_ha-564251.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt ha-564251-m02:/home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m02 "sudo cat /home/docker/cp-test_ha-564251-m03_ha-564251-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m03:/home/docker/cp-test.txt ha-564251-m04:/home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m04 "sudo cat /home/docker/cp-test_ha-564251-m03_ha-564251-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp testdata/cp-test.txt ha-564251-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1435459431/001/cp-test_ha-564251-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt ha-564251:/home/docker/cp-test_ha-564251-m04_ha-564251.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251 "sudo cat /home/docker/cp-test_ha-564251-m04_ha-564251.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt ha-564251-m02:/home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m02 "sudo cat /home/docker/cp-test_ha-564251-m04_ha-564251-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 cp ha-564251-m04:/home/docker/cp-test.txt ha-564251-m03:/home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 ssh -n ha-564251-m03 "sudo cat /home/docker/cp-test_ha-564251-m04_ha-564251-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.20s)
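Note: the sequence above exercises every copy direction minikube cp supports. The general pattern, as a sketch (profile and node names are placeholders):

    # host -> node
    minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
    # node -> host
    minikube -p <profile> cp <node>:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/cp-test_copy.txt
    # verify on the target node over SSH
    minikube -p <profile> ssh -n <node-b> "sudo cat /home/docker/cp-test_copy.txt"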

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.466041315s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.22s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-564251 node delete m03 -v=7 --alsologtostderr: (16.515473587s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.22s)
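Note: the go-template above prints the Ready condition status of each node, one line per node. Run standalone it produces output of this shape (one True per healthy node; shown here as the expected shape, not captured from this run):

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
     True
     True
     True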

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/RestartCluster (348.55s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-564251 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0721 23:57:54.282721   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0721 23:59:55.172451   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0722 00:01:18.217825   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0722 00:02:54.282634   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-564251 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m47.849734054s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (348.55s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.35s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.35s)

TestMultiControlPlane/serial/AddSecondaryNode (75.57s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-564251 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-564251 --control-plane -v=7 --alsologtostderr: (1m14.776500216s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-564251 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

TestJSONOutput/start/Command (91.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-388215 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0722 00:04:55.172494   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-388215 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m31.699729831s)
--- PASS: TestJSONOutput/start/Command (91.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-388215 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-388215 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.57s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-388215 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-388215 --output=json --user=testUser: (6.573703143s)
--- PASS: TestJSONOutput/stop/Command (6.57s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-067377 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-067377 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.486421ms)
-- stdout --
	{"specversion":"1.0","id":"31f75bee-b01e-4002-95a9-a71421cdba7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-067377] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe3f1988-21fc-411c-bd3b-15f26259adbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"78305a5d-324c-4222-b12d-f2ead1589bbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ae3c37a8-147e-4da7-a890-060708d0e5a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig"}}
	{"specversion":"1.0","id":"56b3c4e1-edbe-4fd0-ae35-72c271b93cc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube"}}
	{"specversion":"1.0","id":"166cef63-e12a-4d28-bf11-b8f1b846d2e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0930b6ce-9bba-420a-9028-4855709603a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8e2a3dfb-da62-4177-a953-e9d025a20c1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-067377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-067377
--- PASS: TestErrorJSONOutput (0.18s)
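Note: every stdout line above is a self-contained CloudEvents-style JSON object, so the stream can be filtered line by line. A sketch using jq (jq is an assumption here, not part of the test; flags mirror the command under test):

    # surface only error events as "NAME: message"
    out/minikube-linux-amd64 start -p json-output-error-067377 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # expected shape: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64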

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (88.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-202500 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-202500 --driver=kvm2  --container-runtime=crio: (43.292430003s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-205324 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-205324 --driver=kvm2  --container-runtime=crio: (42.25568632s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-202500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-205324
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-205324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-205324
helpers_test.go:175: Cleaning up "first-202500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-202500
--- PASS: TestMinikubeProfile (88.35s)
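Note: "minikube profile <name>" only switches the active profile; the test then reads the state back via "profile list -ojson". A sketch of that readback (jq and the .valid[].Name field layout are assumptions worth verifying against your minikube version):

    out/minikube-linux-amd64 profile first-202500                        # switch the active profile
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name' # list valid profile names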

TestMountStart/serial/StartWithMountFirst (23.81s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-894808 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0722 00:07:54.282762   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-894808 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.811670491s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.81s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-894808 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-894808 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
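Note: the grep above asserts that /minikube-host is served over a 9p mount with the options passed to StartWithMountFirst. Checked by hand, the matching line has roughly this shape (illustrative, not captured from this run):

    $ minikube -p mount-start-1-894808 ssh -- mount | grep 9p
    192.168.39.1 on /minikube-host type 9p (rw,...,msize=6543,...,port=46464)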

TestMountStart/serial/StartWithMountSecond (26.56s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-913496 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-913496 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.55573495s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.56s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-913496 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-913496 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-894808 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-913496 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-913496 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-913496
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-913496: (1.262907022s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (23.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-913496
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-913496: (22.754473626s)
--- PASS: TestMountStart/serial/RestartStopped (23.75s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-913496 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-913496 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (114.39s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-332426 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0722 00:09:55.172941   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-332426 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.006233434s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.39s)

TestMultiNode/serial/DeployApp2Nodes (5.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-332426 -- rollout status deployment/busybox: (3.822526888s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-d4fqv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-mccbm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-d4fqv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-mccbm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-d4fqv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-mccbm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.18s)

TestMultiNode/serial/PingHostFrom2Pods (0.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-d4fqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-d4fqv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-mccbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-332426 -- exec busybox-fc5497c4f-mccbm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

TestMultiNode/serial/AddNode (47.96s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-332426 -v 3 --alsologtostderr
E0722 00:10:57.331831   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-332426 -v 3 --alsologtostderr: (47.41543661s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.96s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-332426 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.86s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp testdata/cp-test.txt multinode-332426:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3372010046/001/cp-test_multinode-332426.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426:/home/docker/cp-test.txt multinode-332426-m02:/home/docker/cp-test_multinode-332426_multinode-332426-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m02 "sudo cat /home/docker/cp-test_multinode-332426_multinode-332426-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426:/home/docker/cp-test.txt multinode-332426-m03:/home/docker/cp-test_multinode-332426_multinode-332426-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m03 "sudo cat /home/docker/cp-test_multinode-332426_multinode-332426-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp testdata/cp-test.txt multinode-332426-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3372010046/001/cp-test_multinode-332426-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426-m02:/home/docker/cp-test.txt multinode-332426:/home/docker/cp-test_multinode-332426-m02_multinode-332426.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426 "sudo cat /home/docker/cp-test_multinode-332426-m02_multinode-332426.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426-m02:/home/docker/cp-test.txt multinode-332426-m03:/home/docker/cp-test_multinode-332426-m02_multinode-332426-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m03 "sudo cat /home/docker/cp-test_multinode-332426-m02_multinode-332426-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp testdata/cp-test.txt multinode-332426-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3372010046/001/cp-test_multinode-332426-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt multinode-332426:/home/docker/cp-test_multinode-332426-m03_multinode-332426.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426 "sudo cat /home/docker/cp-test_multinode-332426-m03_multinode-332426.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 cp multinode-332426-m03:/home/docker/cp-test.txt multinode-332426-m02:/home/docker/cp-test_multinode-332426-m03_multinode-332426-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 ssh -n multinode-332426-m02 "sudo cat /home/docker/cp-test_multinode-332426-m03_multinode-332426-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.86s)

TestMultiNode/serial/StopNode (2.18s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-332426 node stop m03: (1.356944902s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-332426 status: exit status 7 (413.232355ms)
-- stdout --
	multinode-332426
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-332426-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-332426-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-332426 status --alsologtostderr: exit status 7 (404.955006ms)
-- stdout --
	multinode-332426
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-332426-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-332426-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0722 00:11:50.248859   40338 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:11:50.249112   40338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:11:50.249122   40338 out.go:304] Setting ErrFile to fd 2...
	I0722 00:11:50.249128   40338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:11:50.249306   40338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:11:50.249483   40338 out.go:298] Setting JSON to false
	I0722 00:11:50.249516   40338 mustload.go:65] Loading cluster: multinode-332426
	I0722 00:11:50.249611   40338 notify.go:220] Checking for updates...
	I0722 00:11:50.249912   40338 config.go:182] Loaded profile config "multinode-332426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:11:50.249927   40338 status.go:255] checking status of multinode-332426 ...
	I0722 00:11:50.250294   40338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:11:50.250341   40338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:11:50.268975   40338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0722 00:11:50.269318   40338 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:11:50.269819   40338 main.go:141] libmachine: Using API Version  1
	I0722 00:11:50.269840   40338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:11:50.270134   40338 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:11:50.270311   40338 main.go:141] libmachine: (multinode-332426) Calling .GetState
	I0722 00:11:50.271876   40338 status.go:330] multinode-332426 host status = "Running" (err=<nil>)
	I0722 00:11:50.271895   40338 host.go:66] Checking if "multinode-332426" exists ...
	I0722 00:11:50.272172   40338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:11:50.272204   40338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:11:50.286753   40338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0722 00:11:50.287105   40338 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:11:50.287524   40338 main.go:141] libmachine: Using API Version  1
	I0722 00:11:50.287545   40338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:11:50.287910   40338 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:11:50.288130   40338 main.go:141] libmachine: (multinode-332426) Calling .GetIP
	I0722 00:11:50.290879   40338 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:11:50.291252   40338 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:11:50.291286   40338 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:11:50.291441   40338 host.go:66] Checking if "multinode-332426" exists ...
	I0722 00:11:50.291766   40338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:11:50.291803   40338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:11:50.306679   40338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0722 00:11:50.307127   40338 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:11:50.307589   40338 main.go:141] libmachine: Using API Version  1
	I0722 00:11:50.307607   40338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:11:50.307914   40338 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:11:50.308156   40338 main.go:141] libmachine: (multinode-332426) Calling .DriverName
	I0722 00:11:50.308390   40338 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 00:11:50.308416   40338 main.go:141] libmachine: (multinode-332426) Calling .GetSSHHostname
	I0722 00:11:50.311059   40338 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:11:50.311463   40338 main.go:141] libmachine: (multinode-332426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:43:f5", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:09:06 +0000 UTC Type:0 Mac:52:54:00:41:43:f5 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-332426 Clientid:01:52:54:00:41:43:f5}
	I0722 00:11:50.311487   40338 main.go:141] libmachine: (multinode-332426) DBG | domain multinode-332426 has defined IP address 192.168.39.67 and MAC address 52:54:00:41:43:f5 in network mk-multinode-332426
	I0722 00:11:50.311617   40338 main.go:141] libmachine: (multinode-332426) Calling .GetSSHPort
	I0722 00:11:50.311788   40338 main.go:141] libmachine: (multinode-332426) Calling .GetSSHKeyPath
	I0722 00:11:50.311933   40338 main.go:141] libmachine: (multinode-332426) Calling .GetSSHUsername
	I0722 00:11:50.312063   40338 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426/id_rsa Username:docker}
	I0722 00:11:50.389996   40338 ssh_runner.go:195] Run: systemctl --version
	I0722 00:11:50.396018   40338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:11:50.409673   40338 kubeconfig.go:125] found "multinode-332426" server: "https://192.168.39.67:8443"
	I0722 00:11:50.409696   40338 api_server.go:166] Checking apiserver status ...
	I0722 00:11:50.409722   40338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:11:50.422559   40338 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0722 00:11:50.431320   40338 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 00:11:50.431359   40338 ssh_runner.go:195] Run: ls
	I0722 00:11:50.435256   40338 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0722 00:11:50.439133   40338 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0722 00:11:50.439156   40338 status.go:422] multinode-332426 apiserver status = Running (err=<nil>)
	I0722 00:11:50.439165   40338 status.go:257] multinode-332426 status: &{Name:multinode-332426 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 00:11:50.439183   40338 status.go:255] checking status of multinode-332426-m02 ...
	I0722 00:11:50.439464   40338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:11:50.439498   40338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:11:50.454439   40338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44215
	I0722 00:11:50.454862   40338 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:11:50.455310   40338 main.go:141] libmachine: Using API Version  1
	I0722 00:11:50.455331   40338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:11:50.455592   40338 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:11:50.455760   40338 main.go:141] libmachine: (multinode-332426-m02) Calling .GetState
	I0722 00:11:50.457217   40338 status.go:330] multinode-332426-m02 host status = "Running" (err=<nil>)
	I0722 00:11:50.457230   40338 host.go:66] Checking if "multinode-332426-m02" exists ...
	I0722 00:11:50.457571   40338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:11:50.457604   40338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:11:50.472449   40338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0722 00:11:50.472879   40338 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:11:50.473352   40338 main.go:141] libmachine: Using API Version  1
	I0722 00:11:50.473389   40338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:11:50.473715   40338 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:11:50.473903   40338 main.go:141] libmachine: (multinode-332426-m02) Calling .GetIP
	I0722 00:11:50.476666   40338 main.go:141] libmachine: (multinode-332426-m02) DBG | domain multinode-332426-m02 has defined MAC address 52:54:00:6c:0c:9c in network mk-multinode-332426
	I0722 00:11:50.477024   40338 main.go:141] libmachine: (multinode-332426-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:0c:9c", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:10:14 +0000 UTC Type:0 Mac:52:54:00:6c:0c:9c Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-332426-m02 Clientid:01:52:54:00:6c:0c:9c}
	I0722 00:11:50.477052   40338 main.go:141] libmachine: (multinode-332426-m02) DBG | domain multinode-332426-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:6c:0c:9c in network mk-multinode-332426
	I0722 00:11:50.477178   40338 host.go:66] Checking if "multinode-332426-m02" exists ...
	I0722 00:11:50.477492   40338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:11:50.477527   40338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:11:50.491847   40338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39837
	I0722 00:11:50.492177   40338 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:11:50.492574   40338 main.go:141] libmachine: Using API Version  1
	I0722 00:11:50.492588   40338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:11:50.492842   40338 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:11:50.493007   40338 main.go:141] libmachine: (multinode-332426-m02) Calling .DriverName
	I0722 00:11:50.493178   40338 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 00:11:50.493197   40338 main.go:141] libmachine: (multinode-332426-m02) Calling .GetSSHHostname
	I0722 00:11:50.495415   40338 main.go:141] libmachine: (multinode-332426-m02) DBG | domain multinode-332426-m02 has defined MAC address 52:54:00:6c:0c:9c in network mk-multinode-332426
	I0722 00:11:50.495765   40338 main.go:141] libmachine: (multinode-332426-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:0c:9c", ip: ""} in network mk-multinode-332426: {Iface:virbr1 ExpiryTime:2024-07-22 01:10:14 +0000 UTC Type:0 Mac:52:54:00:6c:0c:9c Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-332426-m02 Clientid:01:52:54:00:6c:0c:9c}
	I0722 00:11:50.495792   40338 main.go:141] libmachine: (multinode-332426-m02) DBG | domain multinode-332426-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:6c:0c:9c in network mk-multinode-332426
	I0722 00:11:50.495930   40338 main.go:141] libmachine: (multinode-332426-m02) Calling .GetSSHPort
	I0722 00:11:50.496083   40338 main.go:141] libmachine: (multinode-332426-m02) Calling .GetSSHKeyPath
	I0722 00:11:50.496237   40338 main.go:141] libmachine: (multinode-332426-m02) Calling .GetSSHUsername
	I0722 00:11:50.496374   40338 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-5094/.minikube/machines/multinode-332426-m02/id_rsa Username:docker}
	I0722 00:11:50.581745   40338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:11:50.595427   40338 status.go:257] multinode-332426-m02 status: &{Name:multinode-332426-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0722 00:11:50.595459   40338 status.go:255] checking status of multinode-332426-m03 ...
	I0722 00:11:50.595767   40338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0722 00:11:50.595809   40338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0722 00:11:50.611013   40338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0722 00:11:50.611439   40338 main.go:141] libmachine: () Calling .GetVersion
	I0722 00:11:50.611877   40338 main.go:141] libmachine: Using API Version  1
	I0722 00:11:50.611898   40338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 00:11:50.612246   40338 main.go:141] libmachine: () Calling .GetMachineName
	I0722 00:11:50.612464   40338 main.go:141] libmachine: (multinode-332426-m03) Calling .GetState
	I0722 00:11:50.614160   40338 status.go:330] multinode-332426-m03 host status = "Stopped" (err=<nil>)
	I0722 00:11:50.614174   40338 status.go:343] host is not running, skipping remaining checks
	I0722 00:11:50.614179   40338 status.go:257] multinode-332426-m03 status: &{Name:multinode-332426-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
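Note on the log above: "minikube status" grades a node in three steps: host state from the libvirt driver, kubelet via "systemctl is-active", and the apiserver via pgrep plus an HTTPS GET on /healthz. The failed freezer-cgroup lookup is only a best-effort refinement, which is why it is logged as a warning and the check falls through to the healthz probe. Below is a minimal standalone sketch of that final probe; this is illustrative Go, not minikube's own code, and it skips TLS verification to stay self-contained (minikube verifies against the cluster CA):

	// healthz_probe.go - hedged sketch of the last check in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification keeps the sketch self-contained; don't do
			// this where the cluster CA certificate is available.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.67:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // 200 and "ok" when healthy
	}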
--- PASS: TestMultiNode/serial/StopNode (2.18s)

TestMultiNode/serial/StartAfterStop (37.87s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-332426 node start m03 -v=7 --alsologtostderr: (37.265199661s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.87s)

TestMultiNode/serial/DeleteNode (2.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 node delete m03
E0722 00:17:54.282374   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-332426 node delete m03: (1.844478217s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
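The one-line go-template above is deliberately unformatted: any whitespace added inside the template would appear in the command's output that the test asserts on. Expanded purely for readability, it walks every node's conditions and prints only the Ready condition's status (one "True" per node on success):

	{{range .items}}
	  {{range .status.conditions}}
	    {{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}
	  {{end}}
	{{end}}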
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)

TestMultiNode/serial/RestartMultiNode (174.57s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-332426 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0722 00:22:54.282393   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-332426 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m54.064729627s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-332426 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (174.57s)

TestMultiNode/serial/ValidateNameConflict (41.85s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-332426
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-332426-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-332426-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (56.109275ms)

-- stdout --
	* [multinode-332426-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-332426-m02' is duplicated with machine name 'multinode-332426-m02' in profile 'multinode-332426'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-332426-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-332426-m03 --driver=kvm2  --container-runtime=crio: (40.590343264s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-332426
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-332426: exit status 80 (198.434343ms)

-- stdout --
	* Adding node m03 to cluster multinode-332426 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-332426-m03 already exists in multinode-332426-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-332426-m03
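The two failures above pin down the naming rules this test exercises: a new profile may not reuse a machine name owned by an existing profile (multinode machines are named <profile>-m02, -m03, ...), and "node add" refuses a node name that now belongs to a standalone profile. A hypothetical sketch of the first check; the shape is illustrative, not minikube's actual code:

	// name_conflict.go - hedged sketch of the profile-name uniqueness rule.
	package main

	import "fmt"

	func nameConflicts(newProfile string, existing map[string][]string) bool {
		for profile, machines := range existing {
			if newProfile == profile {
				return true
			}
			for _, machine := range machines {
				if newProfile == machine {
					return true
				}
			}
		}
		return false
	}

	func main() {
		// multinode-332426 owns its primary machine plus -m02 (m03 was deleted earlier).
		existing := map[string][]string{
			"multinode-332426": {"multinode-332426", "multinode-332426-m02"},
		}
		fmt.Println(nameConflicts("multinode-332426-m02", existing)) // true  -> MK_USAGE, as above
		fmt.Println(nameConflicts("multinode-332426-m03", existing)) // false -> start proceeds
	}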
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.85s)

TestScheduledStopUnix (109.71s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-154657 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-154657 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.192386157s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-154657 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-154657 -n scheduled-stop-154657
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-154657 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-154657 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-154657 -n scheduled-stop-154657
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-154657
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-154657 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0722 00:29:55.172398   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-154657
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-154657: exit status 7 (64.133499ms)

-- stdout --
	scheduled-stop-154657
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-154657 -n scheduled-stop-154657
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-154657 -n scheduled-stop-154657: exit status 7 (64.022743ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-154657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-154657
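The sequence above covers the full scheduled-stop lifecycle: arm a 5-minute timer, re-arm at 15 seconds, cancel, re-arm again, then observe the host actually stopping (a stopped profile makes "status" exit with code 7, as seen above). The same flow, driven from Go with exactly the flags the test passes:

	// scheduled_stop.go - sketch of the flow above; "scheduled-stop-154657" is this run's profile.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("%v -> err=%v\n%s", args, err, out)
	}

	func main() {
		run("stop", "-p", "scheduled-stop-154657", "--schedule", "5m")   // arm a 5-minute timer
		run("stop", "-p", "scheduled-stop-154657", "--cancel-scheduled") // disarm it
		run("stop", "-p", "scheduled-stop-154657", "--schedule", "15s")  // re-arm; host stops ~15s later
		run("status", "-p", "scheduled-stop-154657")                     // exits 7 once everything is Stopped
	}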
--- PASS: TestScheduledStopUnix (109.71s)

TestRunningBinaryUpgrade (178.37s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4257828111 start -p running-upgrade-012741 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0722 00:32:54.282746   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4257828111 start -p running-upgrade-012741 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m31.542827087s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-012741 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-012741 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.368031706s)
helpers_test.go:175: Cleaning up "running-upgrade-012741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-012741
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-012741: (1.14886264s)
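Sketch of the upgrade path this test follows: a frozen v1.26.0 release binary creates the cluster, then the binary under test restarts the same profile in place; success means the new binary adopts the running cluster rather than recreating it. Illustrative Go mirroring the two commands in the log (note the old release still spells the flag --vm-driver):

	// running_upgrade.go - hedged sketch of the in-place upgrade exercised above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func start(bin string, extra ...string) {
		args := append([]string{"start", "-p", "running-upgrade-012741",
			"--memory=2200", "--container-runtime=crio"}, extra...)
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("%s: err=%v\n%s", bin, err, out)
	}

	func main() {
		start("/tmp/minikube-v1.26.0.4257828111", "--vm-driver=kvm2") // old release binary
		start("out/minikube-linux-amd64", "--driver=kvm2")            // binary under test, same profile
	}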
--- PASS: TestRunningBinaryUpgrade (178.37s)

TestNetworkPlugins/group/false (2.79s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-280040 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-280040 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (97.411668ms)

-- stdout --
	* [false-280040] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0722 00:30:18.945818   48350 out.go:291] Setting OutFile to fd 1 ...
	I0722 00:30:18.945906   48350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:30:18.945914   48350 out.go:304] Setting ErrFile to fd 2...
	I0722 00:30:18.945918   48350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:30:18.946112   48350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-5094/.minikube/bin
	I0722 00:30:18.946688   48350 out.go:298] Setting JSON to false
	I0722 00:30:18.947549   48350 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4363,"bootTime":1721603856,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0722 00:30:18.947609   48350 start.go:139] virtualization: kvm guest
	I0722 00:30:18.949640   48350 out.go:177] * [false-280040] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0722 00:30:18.951140   48350 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:30:18.951145   48350 notify.go:220] Checking for updates...
	I0722 00:30:18.952533   48350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:30:18.953842   48350 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	I0722 00:30:18.955162   48350 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	I0722 00:30:18.956399   48350 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0722 00:30:18.957549   48350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:30:18.959010   48350 config.go:182] Loaded profile config "force-systemd-flag-959082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:30:18.959123   48350 config.go:182] Loaded profile config "kubernetes-upgrade-921436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0722 00:30:18.959228   48350 config.go:182] Loaded profile config "offline-crio-897769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0722 00:30:18.959326   48350 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:30:18.995941   48350 out.go:177] * Using the kvm2 driver based on user configuration
	I0722 00:30:18.997147   48350 start.go:297] selected driver: kvm2
	I0722 00:30:18.997165   48350 start.go:901] validating driver "kvm2" against <nil>
	I0722 00:30:18.997176   48350 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:30:18.999094   48350 out.go:177] 
	W0722 00:30:19.000327   48350 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0722 00:30:19.001423   48350 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-280040 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-280040

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-280040

>>> host: /etc/nsswitch.conf:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /etc/hosts:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /etc/resolv.conf:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-280040

>>> host: crictl pods:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: crictl containers:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> k8s: describe netcat deployment:
error: context "false-280040" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-280040" does not exist

>>> k8s: netcat logs:
error: context "false-280040" does not exist

>>> k8s: describe coredns deployment:
error: context "false-280040" does not exist

>>> k8s: describe coredns pods:
error: context "false-280040" does not exist

>>> k8s: coredns logs:
error: context "false-280040" does not exist

>>> k8s: describe api server pod(s):
error: context "false-280040" does not exist

>>> k8s: api server logs:
error: context "false-280040" does not exist

>>> host: /etc/cni:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: ip a s:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: ip r s:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: iptables-save:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: iptables table nat:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> k8s: describe kube-proxy daemon set:
error: context "false-280040" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-280040" does not exist

>>> k8s: kube-proxy logs:
error: context "false-280040" does not exist

>>> host: kubelet daemon status:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: kubelet daemon config:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> k8s: kubelet logs:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-280040

>>> host: docker daemon status:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: docker daemon config:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /etc/docker/daemon.json:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: docker system info:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: cri-docker daemon status:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: cri-docker daemon config:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: cri-dockerd version:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: containerd daemon status:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: containerd daemon config:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /etc/containerd/config.toml:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: containerd config dump:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: crio daemon status:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: crio daemon config:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: /etc/crio:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

>>> host: crio config:
* Profile "false-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-280040"

----------------------- debugLogs end: false-280040 [took: 2.55726791s] --------------------------------
helpers_test.go:175: Cleaning up "false-280040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-280040
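The 97ms runtime and exit status 14 (MK_USAGE) show the start was rejected during flag validation, before any VM work began: the crio runtime cannot run pods without a CNI plugin, so --cni=false is refused outright. A hypothetical sketch of that guard (illustrative, not minikube's code):

	// cni_guard.go - hedged sketch of the validation that produced the failure above.
	package main

	import (
		"fmt"
		"os"
	)

	func validateCNI(runtime, cni string) error {
		if cni == "false" && runtime == "crio" {
			return fmt.Errorf("The %q container runtime requires CNI", runtime)
		}
		return nil
	}

	func main() {
		if err := validateCNI("crio", "false"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(14) // usage-error exit code, matching the test's expectation
		}
	}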
--- PASS: TestNetworkPlugins/group/false (2.79s)

TestStoppedBinaryUpgrade/Setup (2.3s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

TestStoppedBinaryUpgrade/Upgrade (110.55s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.26436743 start -p stopped-upgrade-897070 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.26436743 start -p stopped-upgrade-897070 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m8.91653523s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.26436743 -p stopped-upgrade-897070 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.26436743 -p stopped-upgrade-897070 stop: (1.458126616s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-897070 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-897070 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.175056246s)
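Unlike the running upgrade above, this variant halts the cluster with the old binary first, so the new binary must cold-start an existing profile. A hedged Go sketch of the three commands in the log (errors are printed, not handled, in this sketch):

	// stopped_upgrade.go - sketch of the stopped-upgrade sequence exercised above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		old := "/tmp/minikube-v1.26.0.26436743" // release binary fetched by the test
		steps := [][]string{
			{old, "start", "-p", "stopped-upgrade-897070", "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio"},
			{old, "-p", "stopped-upgrade-897070", "stop"}, // upgrade then runs against a cold cluster
			{"out/minikube-linux-amd64", "start", "-p", "stopped-upgrade-897070", "--memory=2200", "--driver=kvm2", "--container-runtime=crio"},
		}
		for _, s := range steps {
			out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
			fmt.Printf("%v -> err=%v\n%s", s, err, out)
		}
	}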
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.55s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-897070
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-302969 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-302969 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (63.83671ms)

-- stdout --
	* [NoKubernetes-302969] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-5094/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-5094/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
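Another pure flag-validation failure (exit 14, no VM ever created): --no-kubernetes promises no cluster, so pinning --kubernetes-version is contradictory. Sketched hypothetically in Go:

	// no_k8s_guard.go - hedged sketch of the flag conflict rejected above.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := true        // --no-kubernetes
		kubernetesVersion := "1.20" // --kubernetes-version, explicitly set by the test
		if noKubernetes && kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
	}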
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (41.55s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-302969 --driver=kvm2  --container-runtime=crio
E0722 00:34:38.219295   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-302969 --driver=kvm2  --container-runtime=crio: (41.287492604s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-302969 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.55s)

TestNoKubernetes/serial/StartWithStopK8s (13.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-302969 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-302969 --no-kubernetes --driver=kvm2  --container-runtime=crio: (11.752610034s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-302969 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-302969 status -o json: exit status 2 (267.079173ms)

-- stdout --
	{"Name":"NoKubernetes-302969","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-302969
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-302969: (1.13432948s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.15s)

TestNoKubernetes/serial/Start (28.56s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-302969 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-302969 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.560033909s)
--- PASS: TestNoKubernetes/serial/Start (28.56s)

TestPause/serial/Start (73.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-998383 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-998383 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m13.940777067s)
--- PASS: TestPause/serial/Start (73.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-302969 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-302969 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.118354ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
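The non-zero exit is the pass condition here: "systemctl is-active --quiet" exits 0 only when the unit is active (3 typically means inactive), and ssh propagates the failure. A local Go sketch of the same check, simplified to the plain kubelet unit name:

	// kubelet_active.go - sketch of the inactive-unit check this test relies on.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func kubeletRunning() bool {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false // non-zero exit: unit inactive or failed
		}
		return err == nil
	}

	func main() {
		fmt.Println("kubelet running:", kubeletRunning()) // the test expects false
	}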
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (0.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-302969
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-302969: (1.288490297s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (68.21s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-302969 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-302969 --driver=kvm2  --container-runtime=crio: (1m8.205580907s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (68.21s)

TestNetworkPlugins/group/auto/Start (82.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m22.943269152s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.94s)

TestPause/serial/SecondStartNoReconfiguration (61.47s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-998383 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-998383 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.442733349s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (61.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-302969 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-302969 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.814695ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestNetworkPlugins/group/kindnet/Start (106.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m46.888761532s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (106.89s)

TestNetworkPlugins/group/calico/Start (146.03s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0722 00:37:54.282341   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m26.02870339s)
--- PASS: TestNetworkPlugins/group/calico/Start (146.03s)

TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-998383 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-998383 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-998383 --output=json --layout=cluster: exit status 2 (245.979467ms)

-- stdout --
	{"Name":"pause-998383","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-998383","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
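
Note: in the cluster-layout JSON above, StatusCode 418 is minikube's marker for a paused component. A sketch of pulling the per-node component states out of that output with jq (jq is an assumption, not part of the harness; the profile name comes from this run):

	# status exits 2 for a paused cluster, so read stdout and ignore the exit code.
	out/minikube-linux-amd64 status -p pause-998383 --output=json --layout=cluster \
	  | jq -r '.Nodes[] | "\(.Name): apiserver=\(.Components.apiserver.StatusName) kubelet=\(.Components.kubelet.StatusName)"'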

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-998383 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-998383 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-280040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-280040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-c5vhb" [fa78fbb7-590a-4986-a6a1-ae6c8acd8913] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-c5vhb" [fa78fbb7-590a-4986-a6a1-ae6c8acd8913] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003924973s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

TestPause/serial/DeletePaused (1.14s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-998383 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-998383 --alsologtostderr -v=5: (1.13608755s)
--- PASS: TestPause/serial/DeletePaused (1.14s)

TestPause/serial/VerifyDeletedResources (0.62s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.62s)
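
Note: VerifyDeletedResources passes when the removed profile no longer appears in the profile list. A sketch of the same check with jq (jq is an assumption, as is the exact JSON grouping into "valid" and "invalid" profiles):

	out/minikube-linux-amd64 profile list --output json \
	  | jq -e '[.valid[]?, .invalid[]?] | map(.Name) | index("pause-998383") == null' \
	  && echo "profile pause-998383 fully removed"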

TestNetworkPlugins/group/custom-flannel/Start (102.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m42.783434184s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (102.78s)
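
Note: unlike the kindnet and calico runs above, this test hands --cni a file path rather than a plugin name, so minikube applies the supplied manifest instead of a built-in CNI. A sketch (profile name hypothetical; manifest path as in the test's testdata):

	out/minikube-linux-amd64 start -p custom-cni-demo \
	  --memory=3072 --driver=kvm2 --container-runtime=crio \
	  --cni=testdata/kube-flannel.yaml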

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-280040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
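
Note: the DNS, Localhost, and HairPin probes above all run from inside the netcat pod. In the nc invocations, -z connects without sending data, -w 5 caps the connect timeout at five seconds, and -i 5 spaces the attempts; the hairpin case dials the pod's own Service name ("netcat"), so it only passes if the CNI routes a pod back to itself through the service VIP. The same three probes by hand:

	kubectl --context auto-280040 exec deployment/netcat -- nslookup kubernetes.default   # cluster DNS
	kubectl --context auto-280040 exec deployment/netcat -- nc -w 5 -z localhost 8080     # local listener
	kubectl --context auto-280040 exec deployment/netcat -- nc -w 5 -z netcat 8080        # hairpin via the Service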

TestNetworkPlugins/group/enable-default-cni/Start (85.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m25.790986274s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.79s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ntdkw" [44ae88b6-0975-4203-977f-b7200ed0f667] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005121999s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
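
Note: ControllerPod simply waits for a Ready pod carrying the CNI's label before the connectivity tests run. Outside the harness the equivalent wait is a one-liner (label and namespace taken from the log above):

	kubectl --context kindnet-280040 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=600s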

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-280040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.7s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-280040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2lndg" [619c7c3d-4fd4-4afb-bc39-25b88730fbbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2lndg" [619c7c3d-4fd4-4afb-bc39-25b88730fbbd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004158671s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.70s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-280040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (82.37s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m22.366618615s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.37s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9hk4v" [e4967b8e-abff-4d38-9134-1a2ec7a8b9ca] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005647314s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-280040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-280040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5mdht" [28aa4c58-8b46-4f3a-aa4d-1f3ae4013847] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5mdht" [28aa4c58-8b46-4f3a-aa4d-1f3ae4013847] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005245819s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-280040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-280040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4dh72" [e2517310-810c-440f-b15b-509a1be26e20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4dh72" [e2517310-810c-440f-b15b-509a1be26e20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.008992706s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-280040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-280040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-280040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k6xfw" [b8b0608a-c1f5-4d11-a49a-3d23ead9a9f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 00:39:55.172959   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-k6xfw" [b8b0608a-c1f5-4d11-a49a-3d23ead9a9f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.012199718s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-280040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-280040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (61.32s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-280040 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m1.315829169s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.32s)

TestStartStop/group/no-preload/serial/FirstStart (140.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-945581 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-945581 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (2m20.806067927s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (140.81s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m6qxg" [0014ffa5-6745-403b-b400-07e0afad5578] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004114336s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-280040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-280040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7q7s9" [2fd349d0-9699-4e0a-b62f-9acde949b689] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7q7s9" [2fd349d0-9699-4e0a-b62f-9acde949b689] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004235056s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-280040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-280040 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-280040 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tndv2" [fc2638e0-38e7-4af9-8eab-886c8d099202] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tndv2" [fc2638e0-38e7-4af9-8eab-886c8d099202] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004096716s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-280040 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-280040 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-214905 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-214905 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m3.159992028s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.16s)

TestStartStop/group/newest-cni/serial/FirstStart (64.66s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-590595 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-590595 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m4.66260142s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.66s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-214905 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2e148a81-2d79-4642-b4bb-9def1168e97d] Pending
helpers_test.go:344: "busybox" [2e148a81-2d79-4642-b4bb-9def1168e97d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2e148a81-2d79-4642-b4bb-9def1168e97d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003275786s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-214905 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)
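
Note: DeployApp is the smoke test every StartStop group repeats: create the busybox pod, wait for it to become Ready, then exec "ulimit -n" to prove that exec works and the container's file-descriptor limit is sane. The same sequence by hand (manifest path as in the test's testdata):

	kubectl --context default-k8s-diff-port-214905 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-214905 wait pod/busybox --for=condition=Ready --timeout=480s
	kubectl --context default-k8s-diff-port-214905 exec busybox -- /bin/sh -c "ulimit -n"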

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-214905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-214905 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-590595 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/Stop (10.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-590595 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-590595 --alsologtostderr -v=3: (10.313508707s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.31s)

TestStartStop/group/no-preload/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-945581 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0f8dc257-685a-4c0e-9bab-ac5d6aa1ed3e] Pending
helpers_test.go:344: "busybox" [0f8dc257-685a-4c0e-9bab-ac5d6aa1ed3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0f8dc257-685a-4c0e-9bab-ac5d6aa1ed3e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004342995s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-945581 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-590595 -n newest-cni-590595
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-590595 -n newest-cni-590595: exit status 7 (63.734956ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-590595 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
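
Note: minikube status composes its exit code from per-component "not running" bits, so the exit status 7 above indicates host, control plane, and kubelet are all down; the test treats that as acceptable and enables the addon on the stopped profile, where it takes effect on the next start. A sketch of the same guard:

	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-590595
	rc=$?
	# 0 = running, 7 = fully stopped; both are fine for enabling an addon.
	if [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ]; then
	  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-590595 \
	    --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	fi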

TestStartStop/group/newest-cni/serial/SecondStart (34.3s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-590595 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0722 00:42:54.281909   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-590595 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (33.938592887s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-590595 -n newest-cni-590595
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-945581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-945581 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-590595 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (4.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-590595 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-590595 --alsologtostderr -v=1: (1.673112697s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-590595 -n newest-cni-590595
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-590595 -n newest-cni-590595: exit status 2 (333.138702ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-590595 -n newest-cni-590595
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-590595 -n newest-cni-590595: exit status 2 (333.266077ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-590595 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-590595 -n newest-cni-590595
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-590595 -n newest-cni-590595
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.08s)
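
Note: the Pause assertions are deliberately asymmetric: a paused profile reports the apiserver as Paused but the kubelet as Stopped, and status exits 2 in both cases (hence "may be ok" above). A round-trip sketch:

	out/minikube-linux-amd64 pause -p newest-cni-590595
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-590595   # prints Paused, exits 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-590595     # prints Stopped, exits 2
	out/minikube-linux-amd64 unpause -p newest-cni-590595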

TestStartStop/group/embed-certs/serial/FirstStart (57.26s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-360389 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0722 00:43:42.266740   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:43:52.192572   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:52.197820   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:52.208084   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:52.228348   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:52.268609   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:52.348959   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:52.509417   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:52.830005   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:53.470515   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:54.751117   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:43:57.311606   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:44:02.432681   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:44:12.672829   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:44:17.334248   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0722 00:44:23.227822   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-360389 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (57.263962452s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.26s)

TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-360389 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c23b021a-f68e-40c7-ac17-1ec62007d59a] Pending
helpers_test.go:344: "busybox" [c23b021a-f68e-40c7-ac17-1ec62007d59a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0722 00:44:31.987206   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:31.992466   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:32.002669   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:32.022917   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:32.063171   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:32.143336   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:32.303843   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:32.624506   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:33.153984   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:44:33.265249   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c23b021a-f68e-40c7-ac17-1ec62007d59a] Running
E0722 00:44:34.545467   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:44:37.106133   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003759434s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-360389 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-360389 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-360389 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (678.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-214905 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0722 00:45:12.947851   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:45:14.114657   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:45:15.370339   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-214905 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (11m18.228056811s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-214905 -n default-k8s-diff-port-214905
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (678.47s)

TestStartStop/group/no-preload/serial/SecondStart (606.85s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-945581 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0722 00:45:35.850724   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:45:45.148211   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:45:51.032493   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:51.037787   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:51.048039   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:51.068314   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:51.108629   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:51.188996   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:51.349770   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:51.670407   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:52.310645   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:53.591244   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:45:53.908822   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:45:56.152090   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:46:01.272370   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:46:08.465878   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:46:10.763759   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:10.769029   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:10.779364   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:10.799691   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:10.839979   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:10.920376   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:11.080969   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:11.401892   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:11.513247   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:46:12.042065   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:13.322839   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:15.883458   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:46:16.810890   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-945581 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (10m6.58565197s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-945581 -n no-preload-945581
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (606.85s)
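
Every Run/Done pair in these SecondStart entries follows the same harness shape: shell out to the minikube binary, wait for it to exit, and log the elapsed time. A minimal Go sketch of that pattern, assuming a simplified harness (this is not the real start_stop_delete_test.go); the binary path and flags are copied from the log lines above:

package sketch

import (
	"os/exec"
	"testing"
	"time"
)

func TestNoPreloadSecondStartSketch(t *testing.T) {
	// Flags taken verbatim from the log; everything else is simplified.
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "no-preload-945581",
		"--memory=2200", "--alsologtostderr", "--wait=true", "--preload=false",
		"--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.31.0-beta.0")

	begin := time.Now()
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("start failed: %v\n%s", err, out)
	}
	// Mirrors the "(dbg) Done: ...: (10m6.58565197s)" line in the report.
	t.Logf("(dbg) Done: %s: (%s)", cmd, time.Since(begin))
}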

TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-366657 --alsologtostderr -v=3
E0722 00:46:21.004413   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-366657 --alsologtostderr -v=3: (3.283314658s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-366657 -n old-k8s-version-366657: exit status 7 (62.690064ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-366657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
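
The "status error: exit status 7 (may be ok)" line above is deliberate: after a stop, minikube status exits non-zero while still printing the host state, and the test only asserts that stdout reads "Stopped". A minimal Go sketch of tolerating that exit code, assuming a simplified check rather than the actual test code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}",
		"-p", "old-k8s-version-366657", "-n", "old-k8s-version-366657")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit (7 in the report above) is expected for a stopped
		// host; the stdout payload still says "Stopped".
		fmt.Printf("status error: exit status %d (may be ok)\n%s",
			exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("%s", out)
}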

TestStartStop/group/embed-certs/serial/SecondStart (494.06s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-360389 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0722 00:47:12.955040   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:47:15.829992   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:47:30.386684   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:47:32.686676   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:47:38.731923   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:47:54.282118   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0722 00:48:01.305612   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:48:28.989507   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:48:34.875436   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:48:52.192994   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:48:54.607029   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:49:19.876317   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:49:31.986848   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:49:46.543541   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:49:54.889546   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:49:55.172198   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0722 00:49:59.671747   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:50:14.227672   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:50:22.572374   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:50:51.032963   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:51:10.764423   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:51:18.219791   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
E0722 00:51:18.716540   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/flannel-280040/client.crt: no such file or directory
E0722 00:51:38.447507   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/bridge-280040/client.crt: no such file or directory
E0722 00:52:54.282586   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/addons-688294/client.crt: no such file or directory
E0722 00:53:01.305908   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/auto-280040/client.crt: no such file or directory
E0722 00:53:52.192569   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/kindnet-280040/client.crt: no such file or directory
E0722 00:54:31.987350   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/calico-280040/client.crt: no such file or directory
E0722 00:54:46.543866   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/custom-flannel-280040/client.crt: no such file or directory
E0722 00:54:54.889253   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/enable-default-cni-280040/client.crt: no such file or directory
E0722 00:54:55.172620   12263 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-5094/.minikube/profiles/functional-135358/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-360389 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (8m13.780320679s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-360389 -n embed-certs-360389
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (494.06s)

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 2.74
269 TestNetworkPlugins/group/cilium 2.98
275 TestStartStop/group/disable-driver-mounts 0.14

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/Volcano (0s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
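
All eight TunnelCmd skips above share one cause: the tunnel tests need to edit the host routing table, and running 'route' would prompt for a password on this CI host. A minimal Go sketch of such a pre-flight probe, assuming this only approximates what the tunnel tests check (it is not the actual functional_test_tunnel_test.go logic):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "sudo -n" never prompts; it fails immediately if a password would be
	// required, which is the condition the skip messages above report.
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		fmt.Printf("password required to execute 'route', skipping testTunnel: %v\n", err)
		return
	}
	fmt.Println("passwordless 'route' available; tunnel tests can run")
}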

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (2.74s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-280040 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-280040

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-280040

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /etc/hosts:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /etc/resolv.conf:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-280040

>>> host: crictl pods:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: crictl containers:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> k8s: describe netcat deployment:
error: context "kubenet-280040" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-280040" does not exist

>>> k8s: netcat logs:
error: context "kubenet-280040" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-280040" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-280040" does not exist

>>> k8s: coredns logs:
error: context "kubenet-280040" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-280040" does not exist

>>> k8s: api server logs:
error: context "kubenet-280040" does not exist

>>> host: /etc/cni:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: ip a s:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: ip r s:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: iptables-save:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: iptables table nat:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-280040" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-280040" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-280040" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: kubelet daemon config:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> k8s: kubelet logs:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-280040

>>> host: docker daemon status:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: docker daemon config:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: docker system info:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: cri-docker daemon status:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: cri-docker daemon config:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: cri-dockerd version:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: containerd daemon status:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: containerd daemon config:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: containerd config dump:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: crio daemon status:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: crio daemon config:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: /etc/crio:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

>>> host: crio config:
* Profile "kubenet-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-280040"

----------------------- debugLogs end: kubenet-280040 [took: 2.606437094s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-280040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-280040
--- SKIP: TestNetworkPlugins/group/kubenet (2.74s)

TestNetworkPlugins/group/cilium (2.98s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-280040 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-280040

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-280040

>>> host: /etc/nsswitch.conf:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

>>> host: /etc/hosts:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

>>> host: /etc/resolv.conf:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-280040

>>> host: crictl pods:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

>>> host: crictl containers:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

>>> k8s: describe netcat deployment:
error: context "cilium-280040" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-280040" does not exist

>>> k8s: netcat logs:
error: context "cilium-280040" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-280040" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-280040" does not exist

>>> k8s: coredns logs:
error: context "cilium-280040" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-280040" does not exist

>>> k8s: api server logs:
error: context "cilium-280040" does not exist

>>> host: /etc/cni:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

>>> host: ip a s:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

>>> host: ip r s:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

>>> host: iptables-save:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-280040

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-280040

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-280040" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-280040" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-280040

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-280040

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-280040" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-280040" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-280040" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-280040" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-280040" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-280040

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-280040" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-280040"

                                                
                                                
----------------------- debugLogs end: cilium-280040 [took: 2.85108655s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-280040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-280040
--- SKIP: TestNetworkPlugins/group/cilium (2.98s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-934399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-934399
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)